About Alin Panaitiu

Professional developer with 9+ years of experience in creating robust services with Python, Go, Crystal

RSS feed: https://alinpanaitiu.com/index.xml

Woodworking as an escape from the absurdity of software

2024-04-28 18:01:45

Some of you might remember the legendary comment of Eric Diven on a Docker CLI issue he opened years ago:

@solvaholic: Sorry I missed your comment of many months ago. I no longer build software; I now make furniture out of wood. The hours are long, the pay sucks, and there’s always the opportunity to remove my finger with a table saw, but nobody asks me if I can add an RSS feed to a DBMS, so there’s that :-)

I say legendary because it has over 9000 reactions and most are positive. There’s a reason why so many devs resonate with that comment.

A lot of us said at some point things like “I’m gonna throw my laptop out the window and start a farm”. Even my last team leader sent me a message out of the blue saying “I think I’ll run a bar. I want to be a bartender and listen to other people’s stories, not figure out why protobuf doesn’t deserialize data that worked JUST FINE for the past three years”.

You know the drill: sometimes the world of software development feels so absurd that you just want to buy a hundred alpacas, sell wool socks, and forget about solving conflicts in package.json for the rest of your life.

I went through those stages too: when the Agile meetings at my last job got so absurd that we were being asked to estimate JIRA task time in T-shirt sizes, I decided to quit that comfy, well-paying job for the uncertainty of making a living from macOS apps. I had only one app that didn’t even work on the latest Apple Silicon chips, and it was making $0, so I really took a bet with it.

Recently, when people started coming to me with so many unrealistic and absurd expectations and demands about what my apps should do, I started wondering whether it would be possible to leave software development for a more physical trade.

A bit of history

Most of my pre-college time was spent on things I didn’t want to do.

I had a bit of childhood, but then I started going to school 6 hours per day, with 1-2 hours spent on commute after 5th grade. I only liked the 10-minute breaks between classes where I played basketball or practiced parkour.

Every day after I came back from school, I had to work in agriculture, either out in the field with crazy winds and sun and UV radiation, or inside a 100-meter long greenhouse where it’s either a 50°C sauna or a muddy rainforest. I was very bad at every job I was given, but it’s what my parents did for a living and I had to help them, no questions asked.

The few hours that remained, usually very late at night when I was tired both physically and mentally, I spent practicing acoustic guitar, doing bodybuilding exercises, writing bad poetry or drawing graphite portraits.

me, ages ago, playing a classical guitar on someone's old couch

I almost never did homework or memorized whatever had to be memorized for the next day of school. I just couldn’t justify spending those few hours I had left on even more stuff I did not want to do.

When I found my liberty in college, hundreds of kilometers away from my parents, it was like something clicked. I suddenly became incapable of doing work that I found meaningless.

Failing classes became acceptable, quitting jobs was something I did with little remorse if I felt I wasn’t helping anyone with the work I was assigned, and bureaucracy became a disease I had to avoid at all costs.

I still washed the dishes though. Cleaning and other “chores” never felt meaningless for some reason.

The first wood thing I did

… was a chess board and piece set. With magnets inside them. Where the pieces look nothing like ordinary chess pieces.

chess board, first iteration

I was trying to get the pieces to snap into place in a satisfying way, and make sure the game stays that way when kids or dogs inevitably bump the table where the board sits.

You know how Magnus Carlsen always adjusts his pieces so meticulously before a game? Well, I have half of that obsession as well, so I wanted to avoid having to do that.

Magnus Carlsen adjusting his pieces before a game
pawn snapping into its square because of the magnet inside

How it was done

I started with a cheap but hefty pine board which I rounded with a lot of sandpaper. Then I asked my wife to help me colour in the darker squares because I’m pretty bad at colouring inside the edges (both literally and figuratively). We used some wood floor markers for that and the colour seems to be holding well.

Most chess board builds you see on YouTube are done by gluing squares of different wood species with alternating colors, but I had neither the skill nor the tools to do that.

Then I drilled holes for the super strong neodymium magnets from the underside of the board, having to get really close to the top side without passing through. I failed on two squares, but some wood putty took care of that.

sculpting chess pieces with my dremel on the balcony

I spent a few sunny days on the balcony sculpting the pieces with a badly sharpened knife and my Dremel. This was quite satisfying; there’s something really nice about seeing a nondescript rectangle take the shape of a little horse in your hands. I mean knight, but in Romanian that piece is called “horse”, and I really don’t see any knight there.

chess board, start to finish

Regarding the design, I got some inspiration after seeing these modernist chess sets, which not only looked beautiful in my eyes, but also had these geometric shapes that didn’t need that much sculpting to replicate. I found ready-to-buy spheres and cubes of wood at a craft shop around me (which took care of pawns and rooks), and the rest were carved out of rectangles and cones of wood.

Modernist chess set designs

Kaval

Two Octobers ago, a Romanian music band called Subcarpați was holding a free “make a Kaval with your own hands” course, where a flute artisan taught the basics of his trade for a week.

The Kaval or “caval” is a long flute with 5 holes and a distinct lower register where notes can sound melancholic, as if coming from far away, as opposed to the thin cheerful sound of the small shepherd flute.

Kaval sample in G minor

Ever since I bought my first Kaval, I always wanted to learn how to build one myself. It’s one of those trades where there’s very little info on the internet, so it feels almost mystical compared to what I’m used to in programming. I would also have the chance to walk home with the finished flute, so of course I went to the course.

Making my own Kaval, in B minor

I loved the fact that we worked in teams of two, and that everything had to be done by hand with no power tools. Even the long bore through the 70cm branch of elder tree had to be done with a hand drill, taking turns to rest our hands.

The artisan had been a shepherd himself since childhood, and taught himself, through a lot of trial and error, how to build good-sounding flutes and how to place the holes so that the flute stays in tune. But he didn’t know why the holes should be at those specific distances or why the wood tube should be of that specific length for each scale.

I wanted to know those things, because I had an idea of making a universal Kaval that can play in any scale.

You see, if you want to play on top of songs in various scales, you need a Kaval made for each specific scale. So you’ll need an A minor flute, a B minor one, a C minor one and so on, for a total of 12 different flute lengths.

I eventually found info on how a flute works by thinking about it as an open or closed tube where the vibrating air creates nodes and antinodes that should coincide with the hole position. At the moment I’m still studying this and working towards my “universal flute” goal.
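As a rough illustration of that open-tube model, here’s a minimal Python sketch of my own (not something from the course) that estimates the tube length needed for the fundamental note of each of the 12 scales. It’s an idealized approximation: it ignores end corrections, bore diameter and hole placement, and the A3 = 220 Hz starting tonic is just an assumption for the example:

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20°C

def tube_length_cm(frequency_hz: float) -> float:
    """Fundamental of an ideal open tube: f = v / (2L)  =>  L = v / (2f)."""
    return SPEED_OF_SOUND / (2 * frequency_hz) * 100

# One flute length per scale: 12 semitones up from an assumed A3 tonic (220 Hz)
NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
for n, note in enumerate(NOTES):
    freq = 220.0 * 2 ** (n / 12)  # equal temperament
    print(f"{note:>2} minor: {freq:6.1f} Hz -> ~{tube_length_cm(freq):5.1f} cm tube")
```

Reassuringly, even this crude model lands in the right ballpark of the ~70cm elder branch we bored through at the course, which is why the “universal flute” idea doesn’t seem completely hopeless to me.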

What does this have to do with software?

For the past 10 years I lived in rented apartments, usually on the 3rd or 4th floor with no access to a courtyard. I was never able to get used to that, given that all my childhood I lived and played in a 2000m² courtyard, on a road where there were more slow horse carriages than noisy cars.

This year I moved into a rented house with a tiny but welcoming garden and a bit of paved court and only now I notice the effect this has had on my mind and behaviour.

I develop macOS apps for a living, and there are some unhealthy things in this field that piled up over the years. I get a lot of messages in a demanding and negative tone, and because walking outside the apartment meant unbearable car noise, obnoxious smells and zero privacy, I always defaulted to simply acting on the feedback, putting up with it and working long hours into the night, instead of going for a walk to calm down.

A few months ago, the most absurd demands started coming up for my apps: things like “why does your app not control the volume of my <weird sound device>? why don’t you just do it, people pay you for it” when the app in question is Lunar, an app for controlling monitor brightness, not sound devices.

Or “why do you disable your apps from working on Windows?”, or “make Clop compress text and copy it to clipboard” (where Clop is my app that automatically compresses copied images, videos and PDFs, I have no idea what compressing text even means in that context).

But this time, I was able to simply walk out the front door, grab a branch of beech wood, and, because I remembered my wife saying we forgot to package the french rolling pin when moving, I took out my pocket knife and started carving a simple rolling pin for her. It was so liberating to be able to just ignore those messages for a while and do something with my hands.

the rolling pin is such a simple tool and to this day, my wife still tells me how much she likes it because it's exactly the right length and thickness for making her tasty egg noodles.. and best of all, it was free

I understand that those people don’t know better, and they would have no idea that there’s no checkbox where you can choose whether an app works on macOS, Windows or Linux. I understand how if the app does something with audio volume or compression, some think that it should do everything related to those workloads, even if it’s completely outside the scope of the app.

But the combination of the negative tone and getting message after message, some people being so persistent that they insist on sending me those messages through all possible mediums (email, Discord, Twitter, contact form, they’ll find me everywhere), makes it hard to just ignore them.

There’s also this oily smell of AI and machine learning in the tech atmosphere, where I no longer feel relevant and I seem to have stopped caring about new tech when I noticed that 8 in 10 articles are about some new LLM or image generation model. I guess I like the smell of wood better.

Side tangent on privileges of being a software dev

I know I’m privileged to even be able to have the choice of what to do with my time. I got lucky when I chose a computer science university at the right time, which allowed me to progress towards a huge semi-passive income over the last 10 years. That doesn’t mean I didn’t work my ass off, but luck plays a huge role too.

I got “lucky” to have my mind traumatised into some kind of OCD-like state where I hate leaving a thing unfinished. So I plow through exhaustion, skip meals, miss house chores and annoy dear people around me because I know “I just need to fix this little thing” and I’ll finish this app/feature/task I started. Even though I also know there’s no real deadline and I can leave it half-finished for now.

But even if it sounds annoying for a person like me to whine about how I don’t feel good or I feel burnt out, the privilege doesn’t negate the feelings. The regression to the norm will make everyone, rich or poor, get used to the status quo and complain about every thing that’s just a little worse than their current state. That’s happiness and sadness in a nutshell.

I’m also vaguely aware that software dev as we know it is about to disappear soon, and I got tired of learning the newest thing just to have it replaced next year. I got tired of back pain and chronic finger pain from so many hours of sitting and typing, I’d rather have pain from work that also builds some muscle.

And I got so tired of everything being online, immaterial, ephemeral and lonely, like indie development tends to be.


Woodworking with cheap tools and free wood

This house we rented is small and the owners had to fit the bedroom upstairs. I really don’t like climbing stairs up and down, especially when I have to let my dog out three times per night. So we gave up a room and started furnishing our own bedroom downstairs.

I didn’t want to buy bedside tables for the price of the bed itself, so I thought I could maybe make my own. I’m not yet skilled enough to build my own bed though, so we bought that.

One day, while walking with my dog, I noticed that some trees were getting trimmed in the vicinity of our house and there were a lot of white birch branches on the side of the road. I said why not?, grabbed some branches and walked like a lunatic with white long sticks dangling up and down and a black dog zig-zagging left and right, all the way home.

I had another small pine panel left from that chess project so I started thinking about the simplest way to turn what I have into a bedside table.

pine board with birch branches

I used low-grit sandpaper to give the board some nice round corners because I love squircles, swallowed about a spoonful of sawdust because I couldn’t find any breathing mask left, criss-crossed 4 branches in a way that would give a stable base, and screwed them to the underside of the board with long wood screws.

The legs would wobble around though, so I drilled small 3mm holes into each branch where they met in the middle, and weaved a florist wire through them to keep the table steady.

Bedside table, made out of pine with birch legs

The laptop bed table

After I showed the bedside table to a friend of mine, he said he also needed a laptop table for those mornings when he’d rather not get out of bed. I wanted to say that’s not very healthy, but what came out instead was sure thing, I’ll do it! Oh well..

I still had the large desk top I had glued up from smaller beech boards, the one I worked on for the past 4 years. It was sitting unused, so I cut part of it and built this cute thing:

cute but heavy laptop table, made out of glued beech wood

You’ll notice three defining features that every laptop table should have:

To tell the truth, all those are side effects of me drilling holes where there should be no hole, and dropping the board on the ground multiple times because my workbench was not large enough. All the things that could go wrong, went wrong with this table.

I hid the defects by turning them into features.

The whole truth actually is that the table looks nothing like what I planned. I bought these nice hidden brass cylindrical hinges to make the table foldable. That way, you could fold the sides flat inside and use it as some kind of armchair desk if you wanted.

Brass hinges

I wasn’t able to drill the correctly sized or positioned holes for the hinges because I still lack a lot of knowledge and skill in working with wood. So after losing my temper with the frickin’ hinges that still didn’t fit after a full day of drilling and chiseling, I glued the sides and inserted 2 trusty long wood screws per side, which I patched with a glue gun that made the screw holes look like eyes.

After I also carved the handles, the table grew kind of a personality of its own, as you can see above.


Why didn’t I do some wood joints, like a dovetail instead of ugly screws and glue?

Because I had no idea they existed. Also, I wasn’t able to fit even a simple hinge; I would probably never have finished this table if I had tried learning wood joinery on it.

This reminds me of how, whenever I did pair programming with a colleague, I’d notice them doing some “nonoptimal” action and say:

Why don’t you just use ripgrep instead of sifting through all these files?

Because they don’t know it exists, stupid. Or because they just want to get this thing done and move on, they don’t grep files all day like you do.

But in my ignorance, I seem to have chosen a good enough joining method. As you can see in this wood joinery comparison, 5 cm (2 in) screws can hold more than 50 kg (110 lbs) of force, and I used even longer screws, so I think it’s going to hold a 3 kg laptop just fine.


Oh right, forgot about this little detail.. I also added a cork pocket for holding a notebook, tablet, phone etc., which I lined with a microfiber cloth on the inside for strength and sewed to the wood with that leftover alpaca wool for style.

Cork pocket sewn to the table side

The bookshelf without books

Large bookshelf (200x120x40 cm), made out of pine boards

While we were stuck in the apartment during the 2020 pandemic, me and my wife bought a lot of stuff that we thought would help us learn new things and start new hobbies. I thought I was going to build smart LED lighting all my life and my wife would become a professional wool knitter. We were losing our minds, for sure.

So now we were stuck with crates of stuff we hadn’t used in years, and didn’t want to start unpacking them around the house. The clutter that followed the pandemic tired our minds just as much as the lockdown itself.

We dumped the crates on an unused stairway spot, and I thought that a bookshelf as large as that spot would clear the clutter.

Before: clutter | After: organized clutter

But I could not find any bookshelf that large, certainly not for cheap. So I traced a few lines in Freeform, took some measurements, and ordered a bunch of large pine boards and a ton of long screws.

I also ordered the cheapest portable workbench I could find ($30) that had a vise, so I can stop making sawdust inside.

A few days later, I got to sawing the shelves with my cheap Japanese pull saw I bought from Lidl years ago.

Hint: Hand sawing a long wood board with no skill will certainly end up with a crooked edge. Stacking up 5 boards one on top of the other will still end up crooked.

Uhm, I guess the hint is, buy a track saw, or make sure the crooked edge isn’t visible.

making the bookshelf

My wife helped a lot with measuring and figuring out where to drill holes and place the screws, while my dog inspected the work regularly to make sure the defects were hidden correctly.

It took two days of screwing.. erm.. driving screws, I mean. But in the end we got the result we wanted! And I got sores in my right arm for days, driving those long screws is harder than I thought.

The desk that became a workbench

In the thumbnail of this post you can see the current “workbench” I use, which is basically that $30 vise workbench I bought for the bookshelf, with the top of my previous “coding desk” attached in the front.

my current workbench

In the image you can see (bottom-left to top, then right):

I also own some no-name chisels that work well enough and some card scrapers that I still struggle to sharpen.

The only power tools I have are a Makita drill and a random orbit sander on which I did spend some money, an old circular saw I found in that old shed (it was good enough to cut miters on that laptop table) and a Dremel I use rarely because I don’t like its power cord. I prefer battery powered tools.

The window bench

Our dog Cora loves sitting at the window, growling at old people and barking at children passing by. Yeah, she’s terrified of children for some reason.

But the window sill is not wide enough and her leg kept falling with a “clang” on the radiator below. So I widened it with two glued-up pine boards, planed and smoothed beforehand, placed on top of the radiator.

Cora sitting at the window
Cora at the window, with the widened sill

This is when I learned that a hand plane is not some antique tool that nobody uses anymore, but a quite versatile piece that can easily smooth grain where I would waste 5 sheets of sandpaper and choke on sawdust.

I still had to let the heat radiate somehow, so I drilled large holes with a Forstner bit, but I also blew out the grain fibers on the underside because I had no idea about this possible problem. Turns out there is a simple solution to drilling large holes without ripping the fibers:

  1. Drill a small 3-6mm hole in the center with a normal wood drill, all the way through to the other side (this will help you see where the Forstner bit should be placed from both sides)
  2. Place the Forstner bit in the hole (this also helps keep the bit centered) and drill the large hole, stopping midway through the board
  3. Turn the board around and repeat step 2 until you meet the other end of the hole

We also wanted to sit with Cora and there was not much space between the bed and the radiator, so I built a narrow bench. I used another two pine boards of the same size, but this time glued them on the side to create a wider board.

For the legs, well the tree trimming continued near us, so one day I found some thick cherry branches which I brought home, scraped the bark from them, then attached them to the bench using screws from the top side.

I was ok with a rustic look so I didn’t spend much on finishing, patching holes, or even proper wood drying. I did use the hand plane to chamfer the edges though, I love taking those thin continuous wood shavings from the edge.

Window bench, in the morning sun

The trunk coffee table

Coffee table made out of a beech log

We recently visited my parents, and loved how the grass finally started growing in some spots where their house and court renovation was finished and was no longer spewing cement dust. It was an abnormally sunny April and I wanted to chat with them over coffee in the early morning before they started the field work, but there was nowhere outside to place the coffee.

First world problems right? If you read about The tail end, you might already understand why a trivial thing like coffee time with my parents feels so important to me.

So one day, while walking on a gravel road near their house, I noticed one neighbour had these huge logs of beech that were recently cut. I thought that would be easy to turn into a small exterior coffee table, so I went to ask if I could buy one.

Well, I kind of had to yell “HELLO!” at their gate because I didn’t know their name, and did that a few times until a seemingly sleepy old man appeared at the front door (it was 5 PM) asking what I wanted. I asked how much he’d want for one of those logs, but he just said to take one, no money needed. There was no point in insisting, so I chose a wide enough but not too wide log, because these things are heavy and I wasn’t sure I could lift it, and rolled it slowly back home.

I didn’t have my usual tools at my parents house, so I improvised. I found a battered cleaver which my dad used for chopping kindling for the barbecue. I sharpened it as well as I could, then used a hammer to roll a burr on the back of the cleaver that I could use for scraping.

Scraping the bark off the beech log

Beech wood has such smooth, hard wood under the bark that it didn’t even need sanding. I used my dad’s power planer to smooth out the top into a quasi-flat surface, then finished it with some walnut oil and it was (almost) ready!

Because the wood was so green, it was certain that it would crack and roughen as it dried. So I cut a groove and wrapped a flat iron band around the top to keep it from moving too much. The bottom can expand as much as it wants; I’m actually quite curious to watch the table morph throughout the summer as we use it.

The orchard bench

Bench made from reclaimed wood, for my parents-in-law's orchard

Because we were born in villages that aren’t that far apart, me and my wife always visit both our parents in the same trip. This time when I got to my parents-in-law, I took a stroll through their little orchard. They added new trees this year! I can’t wait to taste the large apricots.

What struck me as odd about the orchard was that there was no patch of grass to lay on. They like digging up the soil every year, and leaving it like that: an arid looking patch of land made of dry dirt boulders. I thought a bench would be a good solution and what do you know, there was an old broken door thrown in the firewood pile just outside the orchard, that had the perfect length and width for a bench.
I forgot to take a photo of the door, but it looked kind of like this one, only worse and with a large rhomboid ◊ hole at the top.

old broken wooden door

I got to work immediately, dismantling the door piece by piece and pulling out nail after nail (they really liked their nails in those old times). I was left with two long and narrow wooden boards, a pile of rotten wood and two pocketfuls of rusted nails.

I sawed off the broken ends of the boards, then used my father-in-law’s power planer to remove the old gray wood from the top, bottom and sides to get to the fresh wood below. There were a lot of holes and valleys, so I had to scrape them by hand with sandpaper rolled around a screwdriver. This took a few more days than I expected, but I eventually got two cleanish boards of.. fir? pine? No idea.

I used a velcro sandpaper attachment for the battery powered drill to sand out the rotten sides and give the boards a curvy and smooth live edge.

Curvy live edge of the bench

For the legs, I stole some more firewood from their pile, where I found some thick branches of unidentified species that were roughly the same length. Stripping the bark with an axe made them look good enough so I screwed them at the four corners of the board. The bench was wobbly with just the legs, so I strengthened it sideways by adding shorter and thinner branches of more unidentified wood between the legs and the center of the board.

I had to do something with the rhomboid ◊ hole, so I filled it with a square 4-by-4 salvaged from a recently dismantled shed, and now the bench has 5 legs. Instead of sawing the leg to size, I left it protruding above the bench and placed another thick salvaged board on top of it to serve as an arm rest, or coffee table, or a place for the bowl of cherries.

For the finish, I burned the bench and the bottom of the legs to get a honey-brown aspect and to make it water resistant. I put a very thin layer of whatever wood lacquer I found in my in-laws shed, just for resistance because I don’t like glossy wood.

Side photo of the bench for a better view of the legs

Other small wood things

Water glass shelf

We don’t have much space on the current eating table, so I built a two-shelf stand where we can place the always present water filter jug and the glasses and free up some of the center space.

It’s incredible how strong just a few screws can be.

Table shelf for holding water filter and glasses

Kaval stand

I thought I should finally do something about the kavals always rolling around on some table or couch throughout the house, so I made a stand from long thin wood boards glued on the side, and finished it with sunflower oil to give it a golden/orange colour.

This way I can always expand it by adding more boards to the side if I want to add more flutes.

Stand for holding my kaval collection

Sharpening block

I need to sharpen blades almost daily, be it the pocket knife, axe, plane blade or chisels. So I made a custom sharpening block with the perfect tools for my sharpening technique.

Sharpening block, diamond plate with leather strop on a beech base

It has a $5 diamond plate with 600 grit on one side and a $5 leather strop (a piece of leather belt might work just as well) on the other side. I attached the leather with two small screws at the top so I can take it out easily if I need a flexible strop for my carving gouge for example. It is loaded with diamond paste which can be found for cheap at gemstone cutting online stores (the knife-specific pastes are a lot more expensive and I’m not sure why).

To be honest, a $0.5 green compound (chromium oxide) works just as well for stropping; that’s what I used before and still use for my detail carving knives. It gives a smoother edge than the diamond, the disadvantage being that it needs to be re-applied more often on the leather and that you need a few more blade passes to get the same result. The diamonds seem to cut faster, but really not much faster.

A bit of a tangent on the sharpening topic

I went through all the phases with sharpening tools. I’ve used water stones, natural stones, ceramic stones, pull-through carbide sharpeners (don’t use these), powered belt sharpeners, wheel sharpeners.

Aside from the pull-through sharpeners and the steel rods, all the others work just as well with the right technique. I settled on the diamond plate because they’re cheap, stay flat, need zero maintenance, and can cut through any type of metal. Paired with a leather strop, for me it’s the simplest way to sharpen.

I recommend this OUTDOORS55 video for a no-bullshit sharpening tutorial and the Science of Sharp blog if you’re curious what the different sharpening techniques do to an edge under a microscope.

The complex simplicity of my static websites

2023-08-08 15:14:26

It was the spring of 2014, over 9 years ago, just 6 months into my first year of college, when my Computer Architecture teacher stopped in the middle of an assembly exercise to tell us that Bitdefender was hiring juniors for Malware Researcher positions.

I had no idea what that was, but boy, did it sound cool…

I fondly remember how at that time we weren’t chasing high salaries and filtering jobs by programming languages and frameworks. We just wanted to learn something.

As students, we needed money as well of course, but when I got the job for 1750 lei (~€350), I suddenly became the richest 18 year old in my home town, so it wasn’t the top priority.

And we learnt so much in 2 years.. obscure things like AOP, a lot of x86 assembly, reverse engineering techniques which dumped us head first into languages like Java, .NET, ActionScript? (malware authors were creative).

But most of all, we did tons of Python scripting, and we loved every minute of it. It was my first time getting acquainted with fast tools like Sublime Text and FAR Manager. Coming from Notepad++ and Windows Explorer, I felt like a mad hacker with the world at my fingertips.

I’m known as a macOS app dev nowadays, but 9 years ago, I actually started by writing petty Python scripts which spurred the obsessive love I have nowadays for clean accolade-free code and indentation based languages.

What does all that have to do with static websites though?

Pythonic HTML

Well, 5 years ago, when I launched my first macOS app, I found myself needing to create a simple webpage to showcase the app and at the very least, provide a way to download it.

And HTML I did not want to write. The XML-like syntax is something I always dreaded, so overfilled with unnecessary </> symbols that make both writing and reading much more cumbersome. I wanted Python syntax for HTML, so I went looking for it.

I went through pug

doctype html
html
  head
    title Lunar - The defacto app for controlling monitor brightness
    meta(itemprop='description' content='...')
    style.
      a.button {
        background: bisque;
        padding: 0.5rem 1rem;
        color: black;
        border-radius: 0.5rem;
      }
      body {
        display: flex;
        flex-direction: column;
        align-items: center;
        justify-content: center;
        text-align: center;
      }
  body
    h1(style='color: white; font: bold 3rem monospace') Lunar
    img(src='https://files.lunar.fyi/display-page.png' style='width: 80%')
    a.button(href='https://files.lunar.fyi/releases/Lunar.dmg') Download

pretty, but it still needs () for attributes, and I still need braces in CSS and JS

then haml

!!!
%html{lang: "en"}
  %head
    %meta{content: "text/html; charset=UTF-8", "http-equiv" => "Content-Type"}/
    %title Lunar - The defacto app for controlling monitor brightness
    %meta{content: "...", itemprop: "description"}/
    :css
      a.button {
        background: bisque;
        padding: 0.5rem 1rem;
        color: black;
        border-radius: 0.5rem;
      }
      body {
        display: flex;
        flex-direction: column;
        align-items: center;
        justify-content: center;
        text-align: center;
      }
  %body{style: "background: #2e2431; min-height: 90vh"}
    %h1{style: "color: white; font: bold 3rem monospace"} Lunar
    %img{src: "https://files.lunar.fyi/display-page.png", style: "width: 80%"}/
    %a.button{href: "https://files.lunar.fyi/releases/Lunar.dmg"} Download

even more symbols: %, :, => and / for self-closing tags

…and eventually stumbled upon Slim and its Python counterpart: Plim

doctype html
html lang="en"
  head
    title Lunar - The defacto app for controlling monitor brightness
    meta itemprop="description" content="..."
    -stylus
      a.button
        background bisque
        padding 0.5rem 1rem
        color black
        border-radius 0.5rem
      body
        display flex
        flex-direction column
        align-items center
        justify-content center
        text-align center

  body style="background: #2e2431; min-height: 90vh"
    h1 style="color: white; font: bold 3rem monospace" Lunar
    img src="https://files.lunar.fyi/display-page.png" style="width: 80%"
    a.button href="https://files.lunar.fyi/releases/Lunar.dmg" Download

ahhh.. so clean!

Here’s how that example would look if I had to write it as HTML:

<!DOCTYPE html>
<html>
    <head>
        <title>Lunar - The defacto app for controlling monitor brightness</title>
        <meta itemprop="description" content="...">
        <style>
            a.button {
                background: bisque;
                padding: 0.5rem 1rem;
                color: black;
                border-radius: 0.5rem;
            }

            body {
                display: flex;
                flex-direction: column;
                align-items: center;
                justify-content: center;
                text-align: center;
            }
        </style>
    </head>

    <body>
        <h1 style="color: white; font: bold 3rem monospace">Lunar</h1>
        <img src="https://files.lunar.fyi/display-page.png" style="width: 80%">
        <a class="button" href="https://files.lunar.fyi/releases/Lunar.dmg">Download</a>
    </body>
</html>

not particularly hard to read, but writing it would require a lot of Shift-holding and repeated tags

The thing I like most about Plim, and why I stuck with it, is that it can parse my other favorite symbol-hating languages without additional configuration: Stylus for CSS, CoffeeScript for JS, and Markdown for content.

Here’s a more complex example to showcase the above features (might require sunglasses):

example of writing an HDR page section, similar to the one on lunar.fyi

---!
  # use Python to generate the dynamic image sizes for the srcset attr

  WIDTHS = [1920, 1280, 1024, 768, 640, 320]

  def srcset(image, ext, page_fraction=1.0):
    return ','.join(
      f'/img/{image}/{width}.{ext} {width // page_fraction:.0f}w'
      for width in WIDTHS
    )

doctype html
html lang="en"
  head
    -stylus
      # use Stylus to do a readable media query that checks for wide color gamut

      @media screen and (color-gamut: p3)
        @supports (-webkit-backdrop-filter: brightness(1.5))
          section#xdr
            -webkit-backdrop-filter: brightness(1)
            filter: brightness(1.5)

  body
    section#xdr
      picture
        source type="image/webp" srcset=${srcset('xdr', 'webp', 0.3)}
        source type="image/png" srcset=${srcset('xdr', 'png', 0.3)}

      -md
        # write markdown that renders as inline HTML

        Unlock the full brightness of your XDR display

        The **2021 MacBook Pro** and the **Pro Display XDR** feature an incredibly bright panel *(1600 nits!)*,
        but which is locked by macOS to a third of its potential *(500 nits...)*.

        Lunar can **remove the brightness lock** and allow you to increase the brightness past that limit.

    -coffee
      # use CoffeeScript to detect if the browser might not support HDR

      $ = document.querySelector
      safari = /^((?!chrome|android).)*safari/i.test navigator.userAgent

      window.onload = () ->
        if not safari
          $('#xdr')?.style.filter = "none"

And best of all, there is no crazy toolchain, bundler or dependency hell involved. No project structure needed, no configuration files. I can just write a contact.plim file, compile it with plimc to a readable contact.html and have a /contact page ready!
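The core appeal is easy to show in miniature. Here’s a toy sketch (plain Python, and decidedly NOT how Plim actually works — Plim also handles attributes, filters like -stylus/-md/-coffee, Mako expressions, and much more) of the one transformation that matters: tags open on indent and close on dedent, so the </> symbols disappear from what you type.

```python
# Toy sketch of indentation-based markup -> HTML. Tags open when a line
# appears, and close automatically when the indentation decreases.
def to_html(src: str) -> str:
    out, stack = [], []  # stack holds (indent, tag) of currently open tags
    for line in src.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        # Dedenting (or staying at the same level) closes deeper/equal tags
        while stack and stack[-1][0] >= indent:
            _, tag = stack.pop()
            out.append("  " * len(stack) + f"</{tag}>")
        tag, _, text = line.strip().partition(" ")
        out.append("  " * len(stack) + f"<{tag}>" + text)
        stack.append((indent, tag))
    while stack:  # close whatever is still open at end of input
        _, tag = stack.pop()
        out.append("  " * len(stack) + f"</{tag}>")
    return "\n".join(out)

page = to_html("html\n  body\n    h1 Lunar")
```

Fifteen lines buy you the “no closing tags” property; everything else Plim adds (attributes, inline filters, Python expressions) layers on top of this same indentation rule.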

So that’s how it went with my app: I wrote a simple index.plim, dropped it on Netlify and went on with my day.

Complexity Cost

Complex simplicity

The app managed to get quite a bit of attention, and while I kept developing it, for the next 4 years the website remained the same heading - image - download button single page. It was only a side project after all.

Working for US companies from Romania made good money, but it was so tiring to get through 3h of video meetings daily, standups, syntax nitpicking in PR review, SCRUM bullshit, JIRA, task writing, task assigning, estimating task time in T-shirt sizes??

In April 2021 I finally got tired of writing useless code and selling my time like it was some grain silo I could always fill back up with even more work…

I bet on developing my app further. Since my college days I had always chosen work that helped me learn new concepts. At some point I had to accept that I had learned enough and it was time to start sharing. This time I really wanted to write software that helped people, and I was willing to spend my savings on it.

Comically Stuffed Stylesheets

A more complete app also required a more complete presentation website, but the styling was getting out of hand. You would think that with flexbox and grids, you can just write vanilla CSS these days, but just adding a bit of variation requires constant jumping between the CSS and HTML files.

A presentation page is usually only 10% HTML markup. The rest is a ton of styling and copy text, so I wanted to optimize my dev experience for that.

There’s no “go to definition” on HTML .classes or #ids because their styles can be defined ✨anywhere✨. So you have to Cmd-F like a madman and be very rigorous on your CSS structure.

The controversial but very clever solution to this was Tailwind CSS: a large collection of short predefined classes that mostly style just the property they hint at.

For example in the first code block I had to write a non-reusable 5-line style to center the body contents.

body {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  text-align: center;
}

With Tailwind, I would have written the body tag like so:

body.flex.flex-col.justify-center.items-center.text-center

That might not seem like much, some would argue that it’s even a lot less readable than the CSS one. Can’t I just define a .center class that I can reuse?

Well, think about a few things:

Sure, long lines of classes might not be so readable, but neither are long files of CSS styling. At least the Tailwind classes are right there at your fingertips, and you can replace a -lg with an -xl to quickly fine-tune your style.
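The mental model behind utility classes is tiny: each class maps to one or two CSS declarations. A toy sketch of that mapping (a handful of hand-picked rules; Tailwind itself generates thousands of these from its config, this is just the idea):

```python
# Each utility class expands to exactly the declaration it hints at.
# These five mappings match the real Tailwind classes used in the
# body.flex.flex-col... example above.
UTILITIES = {
    "flex": "display: flex",
    "flex-col": "flex-direction: column",
    "items-center": "align-items: center",
    "justify-center": "justify-content: center",
    "text-center": "text-align: center",
}

def expand(class_list: str) -> str:
    """Turn a space-separated class list into the CSS it implies."""
    return "; ".join(UTILITIES[c] for c in class_list.split())

css = expand("flex flex-col justify-center items-center text-center")
```

`expand` here reproduces exactly the 5-line body style from the first code block, which is the point: the class list on the tag and the CSS rule in a stylesheet carry the same information, but only one of them lives next to the markup it styles.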

Complexity Cost

Responsive images

So many people obsess over the size of their JS or CSS, but fail to realize that the bulk of their page weight comes from unnecessarily large, poorly compressed images.

Of course, I was one of those people.

For years, my app’s website had a screenshot of its window as an uncompressed PNG, loading slowly from top to bottom and chugging the user’s bandwidth.

Old Lunar website

I had no idea, but screenshots and screen recordings are often up to 10x larger than their visually indistinguishable compressed counterparts.

I even wrote an app to fix that since I’m constantly sending screenshots to people and was tired of waiting for 5MB images to upload in rapid chats.

It’s called Clop if you want to check it out.

Yes, just like that famous ransomware; it wasn’t that famous when I named the app.

I needed a lot more images to showcase the features of an app controlling monitor brightness and colors, so I had to improve on this.

Delivering the smallest image necessary to the user is quite a complex endeavour:

  1. Optimize the image using ImageOptim
  2. Resize it to fit multiple screen sizes using vipsthumbnail
  3. Figure out what fraction of the page width will be occupied by the image
  4. Write a suitable srcset attribute to load the suitable image
  5. Optional: convert the image to formats like webp, avif or JPEG XL for smallest file size

I did so much of that work manually in the past… thankfully nowadays I have imgproxy to do the encoding, optimization and resizing for me.

I just have to write the srcset, for which I defined Plim and Python functions to do the string wrangling for me.

-def image(img, ext='png', factor=0.4, mobile_factor=1)
    picture
        -call img=${img} ext=${ext} factor=${factor} mobile_factor=${mobile_factor} self:sources
        img srcset=${srcset(img, ext, factor)}

-def sources(img, ext='png', factor=0.4, mobile_factor=1)
    source type="image/avif" srcset=${srcset(img, ext, mobile_factor, convert_to="avif")} media="(max-width: 767px)"
    source type="image/avif" srcset=${srcset(img, ext, factor, convert_to="avif")} media="(min-width: 768px)"

    source type="image/webp" srcset=${srcset(img, ext, mobile_factor, convert_to="webp")} media="(max-width: 767px)"
    source type="image/webp" srcset=${srcset(img, ext, factor, convert_to="webp")} media="(min-width: 768px)"

    source type="image/${ext}" srcset=${srcset(img, ext, mobile_factor)} media="(max-width: 767px)"
    source type="image/${ext}" srcset=${srcset(img, ext, factor)} media="(min-width: 768px)"

import urllib.parse

WIDTHS = [1920, 1280, 1024, 768, 640, 320]

def imgurl(image, width, ext="png", convert_to=""):
    conversion = f"@{convert_to}" if convert_to else ""
    return f"https://img.panaitiu.com/_/{width}/plain/https://lunar.fyi/img/{urllib.parse.quote(image)}.{ext}{conversion}"


def srcset(image, ext="png", factor=1.0, convert_to=""):
    return ",".join(
        f"{imgurl(image, width, ext, convert_to)} {width // factor:.0f}w"
        for width in WIDTHS
    )
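To make the output concrete, here is what those helpers emit for one image (the functions are restated so the snippet runs standalone; the image name and factor are made-up example values):

```python
import urllib.parse

# Restating the helpers from above so this snippet is self-contained.
WIDTHS = [1920, 1280, 1024, 768, 640, 320]

def imgurl(image, width, ext="png", convert_to=""):
    conversion = f"@{convert_to}" if convert_to else ""
    return (f"https://img.panaitiu.com/_/{width}/plain/"
            f"https://lunar.fyi/img/{urllib.parse.quote(image)}.{ext}{conversion}")

def srcset(image, ext="png", factor=1.0, convert_to=""):
    return ",".join(
        f"{imgurl(image, width, ext, convert_to)} {width // factor:.0f}w"
        for width in WIDTHS
    )

# A hypothetical image occupying ~40% of the page width, served as AVIF:
entries = srcset("display-page", factor=0.4, convert_to="avif").split(",")
```

Each entry pairs an imgproxy resize URL with a `w` width descriptor scaled by the page fraction, which is what lets the browser pick the smallest file that still looks sharp at the image’s rendered size.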

Complexity Cost

Hot reloading

After 2 weeks of editing the page, Cmd-Tab to the browser, Cmd-R to refresh, I got really tired of this routine.

I worked with Next.js before on Noiseblend and loved how each file change automatically gets refreshed in the browser. Instantly and in-place as well, not a full page refresh. I got the same experience when I worked with React Native.

There should be something for static pages too, I thought. Well it turns out there is, it’s called LiveReload and I had to slap my forehead for not searching for it sooner.

After installing the browser extension, and running the livereloadx --static file watcher, I got my hot reloading dev experience back.

Actually now that I think about it, Hugo has super fast hot reloading, how does it accomplish that? Yep, turns out Hugo uses LiveReload as well.
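The watching half of tools like livereloadx boils down to noticing mtime changes (the other half is a WebSocket protocol the browser extension listens on). A minimal polling sketch of just the detection part, as an illustration rather than how livereloadx is actually implemented:

```python
import os

def changed(paths, mtimes):
    """Return the files whose mtime differs from the snapshot dict,
    updating the snapshot in place. A dev server would call this in a
    loop and push a reload message to the browser for each hit."""
    out = []
    for p in paths:
        m = os.stat(p).st_mtime
        if mtimes.get(p) != m:
            mtimes[p] = m
            out.append(p)
    return out
```
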

Complexity Cost

Contact pages

After releasing the new app version, many things broke, as expected.

People tried to reach me in so many ways: GitHub issues, personal email, through the app licensing provider, even Facebook Messenger. I had no idea that including an official contact channel would be so vital.

And I had no idea how to even do it. A contact form needs, like, a server to POST to, right? And that server needs to notify me in some way, and then I have to respond to the user in some other way… sigh

I thought about those chat bubbles that a lot of sites have, but I used them on Noiseblend and did not like the experience. Plus I dislike seeing them myself, they’re an eyesore and a nuisance obscuring page content and possibly violating privacy.

After long searches (not sure why it took so long), I stumbled upon Formspark: a service that gives you a link to POST your form to, and they send you an email with the form contents. The email contains the user’s address in Reply-To, so I can reply normally from my mail client.

form action="https://submit-form.com/some-random-id"
    label for="name" Name
    input#from hidden="true" name="_email.from" type="text"
    input#name name="name" placeholder="John Doe" required="" type="text"

    label for="email" Email
    input#email name="email" placeholder="you@example.com" required="" type="email"

    label for="subject" Subject
    input#email-subject hidden="true" name="_email.subject" type="text"
    input#subject name="subject" placeholder="What's this message about?" required="" type="text"

    label for="message" Message
    textarea#message name="message" placeholder="Something about our apps perhaps" required="" type="text" rows="6"

-coffee
    # Custom subject and name: https://documentation.formspark.io/customization/notification-email.html#subject

    nameInput = document.getElementById("name")
    fromInput = document.getElementById("from")
    nameInput.addEventListener 'input', (event) -> fromInput.value = event.target.value

    subjectInput = document.getElementById("subject")
    emailSubjectInput = document.getElementById("email-subject")
    subjectInput.addEventListener 'input', (event) -> emailSubjectInput.value = event.target.value

Complexity Cost

None, I guess. I just hope the prolific solo dev behind Formspark doesn’t die or get kidnapped or something.

And you call this “simple”?

It’s not. Really. It’s crazy what I had to go through to get to a productive setup that fits my needs.

One could say I could have spent all that time writing vanilla HTML, CSS and JS and would have had the same result in the same amount of time. I agree, if time were all that mattered.

But for some people (like me), feeling productive, seeing how easy it is to test my ideas, and how code seems to flow from my fingertips at the speed of thought is what decides whether I’ll ever finish and publish something, or lose my patience and fall back to comfort zones.

Having to write the same boilerplate code over and over again, constant context switching between files, jumping back into a project after a few days and not knowing where everything is in those thousand-line files… these are all detractors that will eventually make me say ”f••k this! at least my day job brings money”.

Reusability

So many JS frameworks were created in the name of reusable components, but they all failed for me.

I mean sure, I can “npm install” a React calendar, and I am now “reusing” and not “reimplementing” the hard work of someone better than me at calendar UIs. But just try to stray a little from the happy path that the component creator envisioned, and you will find it mind-bendingly hard to adapt the component to your specific needs.

You might raise a GitHub issue and the creator will add a few params so you can customize that specific thing, but so will others with different, maybe clashing needs. Soon enough, that component is declared unwieldy and too complex to use, the dev will say “f••k this! I’d rather do furniture”, and someone else will come out and say: here’s the next best thing in React calendar libraries, so much simpler to use than those behemoths!

I never had this goal in mind but unexpectedly, the above setup is generic enough that I was able to extract it into a set of files for starting a new website. I can now duplicate that folder and start changing site-specific bits to get a new website.

Here are the websites I’ve done using this method:

And the best part is that for each website I published a working version in less than a day: good looking enough, with a contact page, and small bandwidth requirements.

How does this solve the problem of straying away from the happy path? Well, this is not an immutable library residing in node_modules, or a JS script on a CDN. It is a set of files I can modify to the site’s needs.

There is no high wall to jump (having to fork a library, figuring out its unique build system etc.) or need to stick to a specific structure. Once the folder is duplicated, it has its own life.

For those interested, here is the repo containing the current state of my templates: github.com/alin23/plim-website

I don’t recommend using it, it’s possible that I’m the only one who finds it simple because I know what went into it. But if you do, I’d love to hear your thoughts.

Gatsby? Jekyll? Hugo?

Weirdly, this website I’m writing on is not made with Plim. At some point I decided to start a personal website, and I thought it probably needs a blog-aware site builder.

At the time, I didn’t know that RSS is an easily templatable XML file, and that all I need for a blog is to write Markdown.
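“Easily templatable” is not an exaggeration. A blog feed is just a handful of XML elements filled in per post, which the stdlib can do in a few lines (titles, URLs, and dates below are made-up placeholders; in practice they would come from each Markdown file’s front matter):

```python
from datetime import datetime, timezone
from email.utils import format_datetime  # RFC 2822 dates, as RSS wants
from xml.sax.saxutils import escape

# Hypothetical post list standing in for parsed Markdown front matter.
POSTS = [
    {"title": "Complex simplicity",
     "url": "https://example.com/complex-simplicity",
     "date": datetime(2023, 8, 8, tzinfo=timezone.utc)},
]

def rss(title, link, posts):
    items = "\n".join(
        "    <item>\n"
        f"      <title>{escape(p['title'])}</title>\n"
        f"      <link>{escape(p['url'])}</link>\n"
        f"      <guid>{escape(p['url'])}</guid>\n"
        f"      <pubDate>{format_datetime(p['date'])}</pubDate>\n"
        "    </item>"
        for p in posts
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<rss version="2.0">\n'
        "  <channel>\n"
        f"    <title>{escape(title)}</title>\n"
        f"    <link>{escape(link)}</link>\n"
        f"{items}\n"
        "  </channel>\n"
        "</rss>"
    )

feed = rss("My blog", "https://example.com", POSTS)
```

That’s the whole trick: render Markdown to HTML for the pages, fill this template for the feed, and the “blog-aware” part of a static site generator is done.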

I remember trying Gatsby and not liking the JS ecosystem around it. Jekyll was my second choice with Github Pages, but I think I fumbled too much with ruby and bundle to get it working and lost patience.

Both problems stemmed from my lack of familiarity with their ecosystems, but my goal was to write a blog, not learn Ruby and JS.

Hugo seemed much simpler, and it was also written in Go and distributed as a standalone binary, which I always like for my tools.

I marveled at Hugo’s speed, loved the fact that it supports theming (although it’s not as simple as it sounds) and that it has a lot of useful stuff built-in like syntax highlighting, image processing, RSS generator etc. But it took me sooo long to understand its structure.

There are many foreign words (to me) in Hugo: archetypes, taxonomies, shortcodes, partials, layouts, categories, series. Unfortunately, by the time I realized that I don’t need the flexibility that this structure provides, I had already finished this website and written my first article.

I also used a theme that uses the Tachyons CSS framework, for which I can never remember the right class to use. I thought about rewriting the website in Plim but converting everything to Tailwind or simple CSS would have been a lot of work.

I eventually started writing simple Markdown files for my notes, and have Caddy convert and serve them on the fly. Helps me write from my phone and not have to deal with Git and Hugo.

I still keep this for longform content, where a laptop is usually needed anyway.

Reverse engineering the MacBook clamshell mode

2023-01-17 21:16:13

You just got a large, Ultrawide monitor for your MacBook. You hook it up and marvel at the amount of pixels.

You notice you never use the MacBook built-in display anymore, and it nags you to have it in your lower peripheral vision.

Closing the lid is not an option because you still use the keyboard and trackpad, maybe even the webcam and TouchID from time to time. So you try things:

Why isn’t there a way to actually disable this screen?

BlackOut

Because a lot of users of my 🌕 Lunar app told me about their grievances with not being able to turn off individual displays in software, I went down the rabbit hole of display mirroring and automated all of the above.

Lunar interface showing the BlackOut function

Now anyone can turn any display off and on at will using keyboard shortcuts, and can even automate the above MacBook + monitor workflow to trigger when an external monitor gets connected or disconnected.

But it still nags me that somehow macOS can actually disable the internal screen completely, while we’re stuck with this zero-brightness-mirroring abomination.

Clamshell Mode

When closing the MacBook lid while a monitor is still connected, the internal screen disappears from the screen list and the external monitors remain available.

This function is called clamshell mode in the laptop world. Congratulations, your $3000 all-in-one computer is now just an SoC with some USB-C ports. Ok, you also get the speakers and the inefficient cooling system.

MacBook with lid closed sitting vertically on a wooden stand

In the pre-chunky-MacBook-Pro-with-notch era, the lid was detected as being closed using magnets in the lid and some Hall effect sensors. So you were able to trick macOS into thinking the lid was closed by simply placing two strong magnets at its sides.

With the new 2021 design, the MacBook has a hinge sensor that can detect not only whether the lid is closed, but also the angle of its closing. Magnets can’t trick ’em anymore.

But all these sensors will probably just trigger some event in software, where a handler will decide if the display should be disabled or not, and call some disableScreenInClamshellMode function.

So where is that function, and can we call it ourselves?

The software side

Since Apple Silicon, most userspace code lives in a single file called a DYLD Shared Cache. Since Ventura, that is located in a Cryptex (a read-only volume) at the following path:

/System/Cryptexes/OS/System/Library/dyld/dyld_shared_cache_arm64e

Since that file is mostly an optimised concatenation of macOS Frameworks, we can extract the binaries using keith/dyld-shared-cache-extractor:

mkdir -p ~/Temp/dyld && cd ~/Temp/dyld
dyld-shared-cache-extractor /System/Cryptexes/OS/System/Library/dyld/dyld_shared_cache_arm64e $PWD

Let’s extract the exported and unexported symbols in text format to be able to search them easily using something like ripgrep.

I’m using /usr/bin/nm with fd’s -x option to take advantage of parallelisation. I like its syntax more than parallel’s, since it has integrated interpolation for the basename/dirname of the argument (note the {/}).

mkdir symbols private-symbols

fd --maxdepth 1 -t f \
    . ./System/Library/*Frameworks/*.framework/Versions/A/ \
    -x sh -c 'nm --demangle --defined-only --extern-only {} > symbols/{/}'
fd --maxdepth 1 -t f \
    . ./System/Library/*Frameworks/*.framework/Versions/A/ \
    -x sh -c 'nm --demangle --defined-only {} > private-symbols/{/}'

Searching for clamshell gives us interesting results. The most notable is this one inside SkyLight:

~/Temp/dyld ❯ rg -i clamshell

symbols/SkyLight
1710:00000001d44bce70 S _kSLSDisplayControlRequestClamshellState

SkyLight.framework is what handles window and display management in macOS, and it usually exports enough symbols that we can call it from Swift, so I’m inclined to follow this path.

Let’s see if the internet has anything for us. I usually search for code on SourceGraph as it has indexed some large macOS repos with dyld dumps. Looking for RequestClamshellState gives us something far more interesting though:

searching RequestClamshellState on SourceGraph

Looks like Apple open sourced the power management code, nice! It even has recent ARM64 code in there, are we that lucky?

Here’s an excerpt of something relevant to our cause:

SLSDisplayPowerControlClient *gSLPowerClient = nil;

enum {
    kPMClamshellOpen          = 1,
    kPMClamshellClosed        = 2,
    kPMClamshellUnknown       = 3,
    kPMClamshellDoesNotExist  = 4
};

void handleSkylightCheckIn(void)
{
// ...
    // create ws power control client
    NSError *err = nil;
    gSLPowerClient = [[SLSDisplayPowerControlClient alloc] initAsyncPowerControlClient:&err notifyQueue:_getPMMainQueue() notificationType:kSLDCNotificationTypeNone notificationBlock:^(void *dict) {
        if (dict != nil) {
            handleSkylightNotification(dict);
        } else {
            ERROR_LOG("Received a nil dictionary from WindowServer callback");
        }
    }];
// ...
}

void requestClamshellState(SLSClamshellState state)
{
    /* Forward clamshell state to WindowServer
     A) a request with a clamshell state of close in interpreted as a turn off clamshell display (clamshell close)
     B) a request with a clamshell state of open in interpreted as a turn on internal and ANY external displays (clamshell open)
     */

    if (!gSLCheckIn) {
        ERROR_LOG("WindowServer has not checked in. Refusing to change clamshell display state");
        return;
    }

    NSError *err = nil;
    NSMutableDictionary *request = [[NSMutableDictionary alloc] initWithCapacity:1];
    NSNumber *ns_state = [[NSNumber alloc] initWithUnsignedChar:state];
    [request setValue:ns_state forKey:kSLSDisplayControlRequestClamshellState];
    SLSDisplayControlRequestUUID uuid = [gSLPowerClient requestStateChange:(NSDictionary *const)request error:&err];
    if ([err code] != 0) {
       ERROR_LOG("Clamshell requestStateChange returned error %{public}@", err);
    } else {
       INFO_LOG("requestClamshellState: state %u, Received uuid %llu", state, uuid);
        struct request_entry *entry = (struct request_entry *)malloc(sizeof(struct request_entry));
        entry->uuid = uuid;
        entry->valid = true;
        STAILQ_INSERT_TAIL(&gRequestUUIDs, entry, entries);
    }
    if (request) {
       [ns_state release];
       [request release];
    }
    if (err) {
        [err release];
    }
}

So it’s instantiating an SLSDisplayPowerControlClient then calling its requestStateChange method. SLS is a prefix related to SkyLight (probably standing for SkyLightServer), let’s see if we have that code in our version of the framework.

I prefer to do that using Hopper and its Read File From DYLD Cache feature which can extract a framework from the currently in-use cache:

Hopper menu item showing Read file from DYLD cache
Hopper showing SLSDisplayPowerControlClient

Ok the class and methods are there, let’s look for what uses them. Since it’s most likely a daemon handling power management, I’ll look for it in /System/Library.

And it looks like powerd is what we’re looking for, containing exactly the code we saw on SourceGraph.

❯ rg -uuu requestClamshellState /System/Library/ 2>/dev/null

/System/Library/CoreServices/powerd.bundle/powerd: binary file matches (found "\0" byte around offset 4)

❯ hopperv4 -e /System/Library/CoreServices/powerd.bundle/powerd
Hopper pseudocode calling requestStateChange

Writing the code

To link and use SLSDisplayPowerControlClient we need some headers, as Swift doesn’t have the method signatures available.

Looking for SLSDisplayPowerControlClient on SourceGraph gives us more than we need.

Let’s create a bridging header so that Swift can link to Objective-C symbols, and a Swift file to where we’ll try to replicate what powerd does.

mkdir clamshell && cd clamshell
touch Bridging-Header.h Clamshell.swift
Bridging-Header.h
#import <Foundation/Foundation.h>

@interface SLSDisplayPowerControlClient {}

- (id)initAsyncPowerControlClient:(id*)arg1 notifyQueue:(id)arg2 notificationType:(UInt8)arg3 notificationBlock:(void (^)(NSDictionary*))notificationBlock;
- (id)initPowerControlClient:(id*)arg1 notifyQueue:(id)arg2 notificationType:(UInt8)arg3 notificationBlock:(void (^)(NSDictionary*))notificationBlock;

- (unsigned long long)requestStateChange:(id)arg1 error:(id*)arg2;
@end

extern NSString* kSLSDisplayControlRequestClamshellState;
UInt8 kSLDCNotificationTypeNone = 0;
Clamshell.swift
import Foundation

enum ClamshellState: Int {
    case open = 1
    case closed = 2
    case unknown = 3
    case doesNotExist = 4
}

var err: AnyObject?
let skyLightPowerClient = SLSDisplayPowerControlClient(powerControlClient: &err, notifyQueue: DispatchQueue.main, notificationType: kSLDCNotificationTypeNone) { dict in
    print(dict as Any)
}

func requestClamshellState(_ state: ClamshellState) {
    // Send the request
    let request: [AnyHashable: Any] = [
        kSLSDisplayControlRequestClamshellState: NSNumber(value: state.rawValue)
    ]

    var err: AnyObject?
    let uuid = skyLightPowerClient!.requestStateChange(request, error: &err)

    // Check the response
    if (err as! NSError?)?.code != 0 {
        print("Clamshell requestStateChange returned error", err?.localizedDescription ?? "")
    } else {
        print("requestClamshellState: state %u, Received uuid %llu", state, uuid)
    }
}

print(skyLightPowerClient!)
requestClamshellState(.closed)

Compiling…

To compile the binary using swiftc we have to point it to the location of SkyLight.framework which is located at /System/Library/PrivateFrameworks.

We then tell it to link the framework using -framework SkyLight and import our bridging header. Then we run the resulting binary.

I prefer to run this using entr to watch the files for changes. With the code editor on the left and the terminal on the right, I can iterate and try things faster by just editing and saving the file, then watching the output on the right.

swiftc \
    -F/System/Library/PrivateFrameworks \
    -framework SkyLight \
    -import-objc-header Bridging-Header.h \
    Clamshell.swift -o Clamshell
./Clamshell

# For faster iteration, watch file changes with entr:

echo Clamshell.swift Bridging-Header.h | entr -rs '\
	swiftc -F/System/Library/PrivateFrameworks \
		-framework SkyLight \
		-import-objc-header Bridging-Header.h \
		 Clamshell.swift -o Clamshell \
	&& ./Clamshell'

Well… it’s not working. The error is not helpful at all, and there’s nothing on the internet related to it.

<SLSDisplayPowerControlClient: 0x600000fb8720>
Clamshell requestStateChange returned error The operation couldn’t be completed. (CoreGraphicsErrorDomain error 1004.)

Looking for errors

Maybe the system log has something for us. One can check that using Console.app but I prefer looking at it in the Terminal through the /usr/bin/log utility.

log stream --predicate 'eventMessage contains "Clamshell"'

Something from AMFI about the binary signature. CMS stands for Cryptographic Message Syntax, which is what codesign adds to a binary when it signs it with a certificate.

kernel: (AppleMobileFileIntegrity) AMFI: '/Users/alin/Temp/dyld/clamshell/Clamshell' has no CMS blob?
kernel: (AppleMobileFileIntegrity) AMFI: '/Users/alin/Temp/dyld/clamshell/Clamshell': Unrecoverable CT signature issue, bailing out.
tccd: [com.apple.TCC:access] AUTHREQ_ATTRIBUTION: msgID=60667.1, attribution={responsible={TCCDProcess: identifier=kitty, pid=19959, auid=501, euid=501, responsible_path=/Applications/kitty.app/Contents/MacOS/kitty, binary_path=/Applications/kitty.app/Contents/MacOS/kitty}, requesting={TCCDProcess: identifier=Clamshell, pid=60667, auid=501, euid=501, binary_path=/Users/alin/Temp/dyld/clamshell/Clamshell}, },

I have Gatekeeper disabled and I’m running the binary from a terminal that’s added to the special Developer Tools section of Security & Privacy, so this shouldn’t cause any problems.

I checked just to be sure, and signing it with my $100/year Apple Developer certificate gets rid of the CMS blob error but doesn’t change anything in the result.


Phew, let's take a break

I just arrived after a long train ride at the house I'm rebuilding with my wife, and wanted to share this nice view with you 😌

It's January, but the sun is warming our faces and the hazelnut trees are already producing their yellow catkins.

Ten years ago, the children of the house's previous owners were walking in knee-deep snow and coasting downhill on their wooden sleds, hurting a few young fir trees on the way down. 🌲

Seasons are changing.

breaza sun in January

Digging deeper

Some system capabilities can only be accessed if the binary has been signed by Apple and has specific entitlements. Checking for powerd’s entitlements gives us something worrying.

The binary seems to use com.apple.private.* entitlements. This usually means that some APIs will fail if the required entitlements are not present.

> codesign -d --entitlements - /System/Library/CoreServices/powerd.bundle/powerd

Executable=/System/Library/CoreServices/powerd.bundle/powerd
[Dict]
	...
	[Key] com.apple.private.SkyLight.displaypowercontrol
	[Value]
		[Bool] true
	...

We can try to add the entitlements ourselves. We just need to create a plist file and use it in codesign:

Entitlements.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>com.apple.private.SkyLight.displaypowercontrol</key>
        <true/>
    </dict>
</plist>

Sign the binary with entitlements and run it:

❯ codesign -fs $CODESIGN_CERT --entitlements Entitlements.plist Clamshell
❯ ./Clamshell
Job 1, './Clamshell' terminated by signal SIGKILL (Forced quit)

Looks like we’re getting killed instantly. The log stream shows AMFI is doing that because we’re not Apple and we’re not supposed to use that entitlement.

kernel: mac_vnode_check_signature: /Users/alin/Temp/dyld/clamshell/Clamshell: code signature validation failed fatally: When validating /Users/alin/Temp/dyld/clamshell/Clamshell:
  Code has restricted entitlements, but the validation of its code signature failed.
Unsatisfied Entitlements: com.apple.private.SkyLight.displaypowercontrol

kernel: (AppleSystemPolicy) ASP: Security policy would not allow process: 57234, /Users/alin/Temp/dyld/clamshell/Clamshell
amfid: /Users/alin/Temp/dyld/clamshell/Clamshell not valid: Error Domain=AppleMobileFileIntegrityError Code=-413 "No matching profile found" UserInfo={NSURL=file:///Users/alin/Temp/dyld/clamshell/Clamshell, unsatisfiedEntitlements=<CFArray 0x155e1b600 [0x1f0e613a8]>{type = immutable, count = 1, values = (
    0 : <CFString 0x155e12db0 [0x1f0e613a8]>{contents = "com.apple.private.SkyLight.displaypowercontrol"}
)}, NSLocalizedDescription=No matching profile found}

AMFI

What’s this AMFI exactly and why is it telling us what we can and cannot do on our own device?

The acronym stands for Apple Mobile File Integrity, and it’s the process enforcing code signatures at the system level.

By default, the OS locks these private APIs because if we were able to use them, malware or a bad actor would be able to use them as well. With them locked by default, malware authors are deterred from trying to use these APIs on targets of lower importance, as doing so would usually require a 0-day exploit.

In the end it’s just another layer of security, and in the rare case that someone needs to bypass it, Apple provides a way to do it. The process involves disabling System Integrity Protection and adding amfi_get_out_of_my_way=1 as a boot arg.

# Inside a Recovery terminal (to disable SIP)

> csrutil disable
> reboot

# Inside a normal terminal after disabling SIP

> sudo nvram boot-args="amfi_get_out_of_my_way=1"
> sudo reboot now

I don’t recommend doing this, as it puts you at great risk: the system volume is no longer read-only, and code signatures are no longer enforced.

I only keep this state for short periods of research, then turn SIP back on for normal day-to-day usage.

In case you need to revert the above changes:

# Inside a normal terminal before enabling SIP

> sudo nvram boot-args=""

# Inside a Recovery terminal (to enable SIP)

> csrutil enable
> reboot

No more AMFI?

Unfortunately, even after disabling AMFI, we’re still encountering the CoreGraphics error 1004. It’s true, AMFI is not complaining about the entitlements anymore; they’re accepted and the binary is no longer SIGKILLed.

But we still can’t get into clamshell mode using just software.

Frida

If you haven’t heard of it, Frida is this awesome tool that lets you inject code into already running processes, hook functions by name (or even by address), observe how and when they’re called, check their arguments and even make your own calls.

Let me share with you another macOS boot arg that I like:

sudo nvram boot-args=-arm64e_preview_abi

This one enables code injection. Now we can use Frida to hook the SkyLight power control methods to see how they are called as we close and open the lid:

> sudo frida-trace -t SkyLight -m '-[SLSDisplayPowerControlClient *]' powerd

// Closing the lid
           /* TID 0x5427 */
  4617 ms  SLSDisplayControlRequestClamshellStateKey: 2
  4617 ms  -[SLSDisplayPowerControlClient requestStateChange:0x13cf06a60 error:0x16b8ca828]
  4628 ms     | -[SLSDisplayPowerControlClient service]
  4628 ms     | -[SLSDisplayPowerControlClient sendStateChangeRequest:0x13cf06a60 uuid:0x16b8ca7e0]
  4628 ms     |    | -[SLSDisplayPowerControlClient service]

// Opening the lid
           /* TID 0x8a17 */
 10537 ms  SLSDisplayControlRequestClamshellStateKey: 1
 10537 ms  -[SLSDisplayPowerControlClient requestStateChange:0x13cc1e1c0 error:0x16b9567a8]
 10538 ms     | -[SLSDisplayPowerControlClient service]
 10538 ms     | -[SLSDisplayPowerControlClient sendStateChangeRequest:0x13cc1e1c0 uuid:0x16b956760]
 10538 ms     |    | -[SLSDisplayPowerControlClient service]

We got our confirmation at least. powerd is indeed calling SLSDisplayPowerControlClient.requestStateChange(2) when closing the lid.

Let’s check what happens when we try to call that method in Clamshell.swift.

We first add the line readLine(strippingNewline: true) at the top of the Clamshell.swift file to make the binary wait for us to press Enter. This is so that we have a running process that we can attach to with Frida.

> sudo frida-trace -t SkyLight -m '-[SLSDisplayPowerControlClient *]' Clamshell

           /* TID 0x103 */
  1475 ms  SLSDisplayControlRequestClamshellStateKey: 2
  1475 ms  -[SLSDisplayPowerControlClient requestStateChange:0x600001d64510 error:0x16d8d3c90]
  1479 ms     | -[SLSDisplayPowerControlClient service]
  1479 ms     | -[SLSDisplayPowerControlClient sendStateChangeRequest:0x600001d64510 uuid:0x16d8d3a10]
  1479 ms     |    | -[SLSDisplayPowerControlClient service]

Everything looks the same; it seems we’re not looking deep enough.

The request method seems to access the service property, which is an SLSXPCService. XPC Services are what macOS uses for low-level interprocess communication.

A process can expose an XPC Service using a label (e.g. com.myapp.RemoteControlService) and listen for incoming requests; other processes can connect to it using the same label and send requests.

The system handles the routing part. And the authentication part.

Looks like an XPC Service can also be restricted to specific code signing requirements. Is it possible that this is what we’re running into here?

Let’s trace SLSXPCService methods as well using Frida:

> sudo frida-trace -t SkyLight -m '-[SLSDisplayPowerControlClient *]' -m '-[SLSXPCService *]' powerd

// Closing the lid while observing powerd
           /* TID 0x518b */
  3029 ms  -[SLSDisplayPowerControlClient requestStateChange:0x139621c60 error:0x16f0c2828]
  3029 ms  SLSDisplayControlRequestClamshellStateKey: 2
  3043 ms     | -[SLSDisplayPowerControlClient service]
  3043 ms     | -[SLSDisplayPowerControlClient sendStateChangeRequest:0x139621c60 uuid:0x16f0c27e0]
  3043 ms     |    | -[SLSDisplayPowerControlClient service]
  3043 ms     |    | -[SLSXPCService sendXPCDictionary:0x13a913be0]
  3043 ms     |    |    | -[SLSXPCService reinitConnection]
  3043 ms     |    |    |    | -[SLSXPCService enabled]
  3043 ms     |    |    |    | -[SLSXPCService enabled]
  3043 ms     |    |    |    | -[SLSXPCService connected]
  3043 ms     |    |    | -[SLSXPCService connection]
  3452 ms  -[SLSXPCService handleXPCEvent:0x13ad0ea40]
  3452 ms     | -[SLSXPCService enabled]
  3452 ms     | -[SLSXPCService cfStringToCStringPtr:0x1f3133020]
  3452 ms     | -[SLSXPCService connected]

> sudo frida-trace -t SkyLight -m '-[SLSDisplayPowerControlClient *]' -m '-[SLSXPCService *]' Clamshell

// Trying to send the clamshell request in software
  1435 ms  -[SLSDisplayPowerControlClient requestStateChange:0x6000014d4030 error:0x16b123c90]
  1435 ms  SLSDisplayControlRequestClamshellStateKey: 2
  1444 ms     | -[SLSDisplayPowerControlClient service]
  1444 ms     | -[SLSDisplayPowerControlClient sendStateChangeRequest:0x6000014d4030 uuid:0x16b123a10]
  1444 ms     |    | -[SLSDisplayPowerControlClient service]
  1444 ms     |    | -[SLSXPCService sendXPCDictionary:0x600003ec4000]
  1444 ms     |    |    | -[SLSXPCService reinitConnection]
  1444 ms     |    |    |    | -[SLSXPCService enabled]
  1444 ms     |    |    |    | -[SLSXPCService connected]
  1444 ms     |    |    |    | -[SLSXPCService autoreconnect]
  1444 ms     |    |    |    | -[SLSXPCService enabled]
Process terminated

// ...we're missing this stuff
//  3043 ms     |    |    | -[SLSXPCService connection]
//  3452 ms  -[SLSXPCService handleXPCEvent:0x13ad0ea40]
//  3452 ms     | -[SLSXPCService enabled]
//  3452 ms     | -[SLSXPCService cfStringToCStringPtr:0x1f3133020]
//  3452 ms     | -[SLSXPCService connected]

Great! Or not?

I’m not sure if I should be happy that we found that our clamshell request doesn’t work because we don’t have an XPC connection, or if I should be worried that this means we won’t be able to make this work with SIP enabled.

I guess it’s time to go deeper to find out.

XPC Services

Now that we have access to Frida, we can use the handy xpcspy tool to sniff the XPC communication of powerd.

I’m thinking maybe we can find the endpoint name of the XPC listener and just connect to it and send a raw message directly, instead of relying on SkyLight to do that.

> sudo xpcspy --parse powerd

// Closing the lid

xpc_connection_send_message
<OS_xpc_connection: <connection: 0x13a808820> { name = (anonymous), listener = false, pid = 30630, euid = 88, egid = 88, asid = 100014 }>
<OS_xpc_dictionary> { count = 4 contents =
    "PayloadType" => <OS_xpc_uint64: <uint64: 0x81917509705f5717>: 3>
    "Command" => <OS_xpc_uint64: <uint64: 0x81917509705f572f>: 4>
    "Payload" => <data> { length = 92 bytes, contents = {
	    SLSDisplayControlRequestClamshellStateKey = 2;
	}
    "UUID" => <OS_xpc_uint64: <uint64: 0x81917509705f5647>: 41>

}

So we have name = (anonymous), listener = false, pid = 30630.

An anonymous listener, can it get even worse? The PID coincides with WindowServer --daemon so it’s definitely the message we’re also trying to send. But with an anonymous listener, we’re stuck relying on SkyLight’s exported code to reach it.

I guess we need to go back to some old-school assembly reading.


Filling empty spaces

After renaming some sub-procedures in Hopper, looking at the graph reveals the different code paths that powerd and Clamshell are taking through SLSXPCService.reinitConnection.

powerd

  1. sees that the service’s enabled and connected properties are true
  2. so it gets out of reinitConnection
  3. and straight into sending the XPC dictionary through the available connection.

Clamshell

Hopper graph showing reinitConnection

Adding some Memory.readPointer calls inside __handlers__/SLSXPCService/reinitConnection.js shows us what SkyLight is expecting to see at 0x20 and 0x28: two NSMallocBlocks, right after the OS_xpc_connection and OS_dispatch_queue_serial properties.

  5502 ms   -[SLSXPCService reinitConnection]
  5502 ms       arg0 obj: <SLSXPCService: 0x11f50f130>

  5502 ms       Memory.readPointer 0x8 0x101

  5502 ms       Memory.readPointer 0x10 0x11f50bb10
  5502 ms       SLSXPCService at 0x10 <OS_xpc_connection: <connection: 0x11f50bb10> { name = (anonymous), listener = false, pid = 396, euid = 88, egid = 88, asid = 100014 }>

  5502 ms       Memory.readPointer 0x18 0x11df05970
  5502 ms       SLSXPCService at 0x18 <OS_dispatch_queue_serial: Power Management main queue>

  5502 ms       Memory.readPointer 0x20 0x11f50a740
  5502 ms       SLSXPCService at 0x20 <__NSMallocBlock__: 0x11f50a740>

  5502 ms       Memory.readPointer 0x28 0x11f50a770
  5502 ms       SLSXPCService at 0x28 <__NSMallocBlock__: 0x11f50a770>

Judging by the contents of SLSXPCService.h, those are the closures for errorBlock and notificationBlock:

@interface SLSXPCService : NSObject <SLSXPCServiceProtocol> {
	char _enabled;
	char _connected;
	char _setTarget;
	char _autoreconnect;
	NSObject*<OS_xpc_object> _connection;
	NSObject*<OS_dispatch_queue> _notifyQueue;

	/* This would be 0x20 */ id _errorBlock;
	/* This would be 0x28 */ id _notificationBlock;

	/*^block*/id _clientErrorBlock;
	/*^block*/id _clientNotificationBlock;
}
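As a sanity check on those offsets, here’s a hypothetical Swift struct mirroring that ObjC ivar layout (isa pointer at 0x0, the four one-byte flags packed from 0x8, then pointer-sized ivars). Swift struct layout isn’t a guaranteed ABI, but the padding arithmetic lands the two blocks exactly at 0x20 and 0x28:

```swift
// Hypothetical mirror of the SLSXPCService ivar layout, just to verify
// the offset arithmetic: isa at 0x00, four 1-byte flags at 0x08..0x0b,
// padding to 0x10, then pointer-sized ivars every 8 bytes.
struct MirroredLayout {
    var isa: UnsafeRawPointer?                  // 0x00
    var flags: (UInt8, UInt8, UInt8, UInt8)     // 0x08 (_enabled … _autoreconnect)
    var connection: UnsafeRawPointer?           // 0x10 (after padding)
    var notifyQueue: UnsafeRawPointer?          // 0x18
    var errorBlock: UnsafeRawPointer?           // 0x20
    var notificationBlock: UnsafeRawPointer?    // 0x28
}

print(MemoryLayout<MirroredLayout>.offset(of: \.errorBlock)!)        // 32 (0x20)
print(MemoryLayout<MirroredLayout>.offset(of: \.notificationBlock)!) // 40 (0x28)
```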

I’m inching closer to the good code path but I seem to never get there.

So here’s what I did so far in Clamshell.swift before calling requestClamshellState:

guard let service = skyLightPowerClient?.service else {
    print("SLSXPCService is nil")
    exit(1)
}

service.autoreconnect = true
service.errorBlock = { err in
    print("service.errorBlock", err)
}
service.notificationBlock = { notification in
    print("service.notificationBlock", notification)
}

After calling requestClamshellState, the code crashes with SIGSEGV inside createNoSenderRecvPairWithQueue:errorHandler:eventHandler: because it branches to the 0x0 address.

// The crashing instruction is:
ldr    x8, [x19, #0x28]

// Because the memory at [x19, #0x28] contains 0x0

// And register x19 contains:
(lldb) po $x19
<__NSMallocBlock__: 0x600000c08330>
 signature: "v8@?0"
 invoke   : 0x1a081a958 (/System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/SkyLight`__75-[SLSXPCService createNoSenderRecvPairWithQueue:errorHandler:eventHandler:]_block_invoke)
 copy     : 0x1a05e0a18 (/System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/SkyLight`__copy_helper_block_e8_32b40r)
 dispose  : 0x1a05e09d4 (/System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/SkyLight`__destroy_helper_block_e8_32b40r)

Giving up (for now)

Unfortunately I’m a bit lost here. I’ll take a break and hope that the solution comes in a dream or on a long walk like in those mythical stories.

The article is already longer than I’d be inclined to read, so if anyone reaches this point: congrats, you have the patience of a monk.

If there are better ways to approach a problem like this one, I’d be glad to hear about them through the contact form.

I’m not always happy to learn that I’ve wasted 4 days on a problem that could have been solved in a few hours with the right tools, but at least I’ll learn how not to bore people with writings on rudimentary tasks next time.

A window switcher on the Mac App Store? Is it even possible?

2022-08-03 02:01:04

Not really, no. Not without annoying workarounds and a confusing user experience.

Another email, another annoyed user: “Firefox not loading websites when launched through rcmd! It works when launched from Alfred… Please fix ASAP!!” I’m gonna fix this Firefox issue once and for all!

Launch Xcode, open the rcmd project, check the launchApp function code, it’s just an NSWorkspace.open call on Firefox.app, what does Alfred do differently?

Disassemble Alfred.app in Hopper, look for NSWorkspace.open, of course it’s there, it’s the exact same thing.

screenshot of hopper showing where open is used in Alfred code

Try open /Applications/Firefox.app in a terminal, it works, websites load as expected.

Breakpoint on launchApp, check the debugger again, let’s be rigorous, what am I really calling open on?

The argument is /System/Volumes/Data/Applications/Firefox.app, which is just a symlink to /Applications/Firefox.app, right? … or was it the other way around? Anyway, let’s just try it for the sake of it, I’m desperate.

Run open /System/Volumes/Data/Applications/Firefox.app, huh?? no websites load? THAT WAS IT?!

Add path.replacingOccurrences(of: "/System/Volumes/Data", with: ""), build, run, hold Right Command, press F, Firefox launches and holy cow everything works!!
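In helper form, the fix boils down to stripping the firmlink prefix before handing the path to open (the function name is mine, not rcmd’s actual code):

```swift
import Foundation

// Hypothetical helper for the fix above: strip the APFS firmlink prefix
// so NSWorkspace.open receives the canonical /Applications path.
func canonicalAppPath(_ path: String) -> String {
    path.replacingOccurrences(of: "/System/Volumes/Data", with: "")
}

print(canonicalAppPath("/System/Volumes/Data/Applications/Firefox.app"))
// /Applications/Firefox.app
```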

I don’t even care why anymore, let’s just release this fix on the App Store.

And while I’m at it, why not try to add that window switching capability that people have been asking about?

I remember something about Accessibility permissions not being available in the sandbox, but I just used an App Store app that was able to request the permissions so there has to be a way, how hard could it be?

Well it turns out it’s pretty darn hard, and I’m still working on this window switching thing to this day.. sigh.. let me tell you about it.


Apps vs windows

There’s an important distinction between switching windows and switching apps on the Mac. As opposed to Microsoft Windows, where you just Alt-Tab through… well, windows, on macOS you Command Tab through apps by default. When an app with multiple windows is focused, Command backtick will cycle through the windows of that app.

keyboard with command tab and backtick keys highlighted

Six years ago I was a Windows power user, and when I got my first Mac, Command Tabbing through apps felt very weird. Suddenly I was closing all windows of Sublime but its icon was still there in the Command Tab list, or I would minimize Chrome and focusing its icon didn’t unminimize it. The app vs window distinction just didn’t exist in my mind.

Now, after 6 years, the macOS way feels a lot more intuitive.

Of course it might just be the power of habit, after all I was able to be just as productive with the Windows way in the past ¯\_(ツ)_/¯

Command Tab Tab Tab Tab…

The app centric approach is nice but having to switch between 10 different apps at a time gets annoying fast.

Pressing Tab 5 times in a row to get to the app I want could be categorized as a first world problem, and I should just get used to it. But doing that 50 times a day, always having to visually check that I chose the right icon, tends to break my flow of thinking and makes me tire faster because of all the context switching.

That’s the main reason I created rcmd: to switch apps without thinking about switching apps.

My right thumb rests nicely on the Right Command key, and I barely use that easy-to-reach key. So I turned it into a dedicated app switching key.

Dynamic assignments

I decided to dynamically assign each app the first letter of its name so that I don’t have to remember what key I assigned to Xcode. I just hold Right Command and press X without any mental effort because I know I have no other app starting with X.

And if I forget that Xcode is not already running (or if it crashes in the background like it sometimes does), rcmd launches it automatically (since I clearly wanted it running if I tried to focus it).
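The dynamic-assignment idea can be sketched in a few lines (all names here are hypothetical, not rcmd’s actual implementation): each app claims the first letter of its name, and on a collision the first claimant keeps the key:

```swift
// Minimal sketch of dynamic key assignment: each app claims the first
// letter of its name; on a collision, the first app keeps the key.
func assignKeys(to apps: [String]) -> [Character: String] {
    var keys: [Character: String] = [:]
    for app in apps {
        guard let first = app.lowercased().first else { continue }
        if keys[first] == nil { keys[first] = app }
    }
    return keys
}

let keys = assignKeys(to: ["Xcode", "Safari", "Sublime Text", "Kitty"])
// keys["x"] == "Xcode"; "Sublime Text" loses "s" to "Safari"
```

This is where static assignments come in: when two apps collide on a letter, a manual mnemonic beats any automatic rule.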

Static assignments

Xcode is a happy case though. I have so many apps starting with S that I decided custom assignments might be a better fit. I left Sublime Text on the S key since it’s my most used app, and then assigned mnemonic keys for the others.

Seek and hide

Often I need to check the status of an app briefly and then get back to what I was doing.

That’s why I added the Hide action in rcmd.

Now I just hold Right Command and press K to check the Kitty terminal, then, without lifting any finger, press K again to hide it and get back to what I was doing.

This also allows the system to activate App Nap for the hidden app and put it into a lower energy usage state until I need it again.

Using rcmd on my MacBook Pro 14"

Is window switching even needed?

Unfortunately yes, there are many cases where an app might have a lot of windows open.

Available solutions

  1. App Exposé: Command Tab allows pressing the ↓ Down Arrow key with the app icon selected, to expose all the windows of that app for visual selection.

    • It’s nice and useful for when you churn through windows a lot, but it’s way too slow for cases when you mostly have the same windows open.
  2. Command backtick (`): this native macOS hotkey will cycle through the windows of the current app, but we’re back to square one, where you have to visually analyze each window to see if you got the right one in focus.

  3. Alt-Tab: this is a really nice open source app which replicates the Microsoft Windows way of selecting windows by thumbnails.

    • It’s what I used for a long time, until I got too frustrated with the fact that all seven of my Sublime Text windows look exactly the same and I also have to read the whole window title to find the one I want to focus.
  4. Contexts.co: a fuzzy searcher for window titles. I’ve used it in the past and it was definitely faster than the rest, but it still required more key presses than I wanted.

    • I don’t really need to search the whole window title, just the project name.
  5. Stage Manager: the new addition in macOS Ventura, which in its current state is just discoverable Spaces.

    • That’s the feeling I got from using it: stages are just like spaces, but more visible (through the left sidebar) and easier to reach for (by clicking on them or by focusing a window in a specific stage).
    • It still doesn’t provide any keyboard control and moving specific windows in and out of the stages requires too much work with the mouse.
    • At least for Spaces I had yabai to provide keyboard shortcuts for moving the current window to whatever space I wanted to.

My preferred solution: the Right Option key

It’s a sunny day in Brașov. I’m on my balcony taking in the sun, testing and perfecting XDR Brightness to make working in direct sunlight easier on my MacBook 14”, while also rewriting parts of the Lunar UI in SwiftUI.

Testing Auto XDR

I’ve already written a lot of SwiftUI boilerplate in my other projects, so I’m mostly copy-pasting stuff between Sublime Text windows. I also have three Sublime windows with disassembled macOS private frameworks, to look for the hidden functions I need to improve the XDR Brightness curve and responsiveness.

Juggling with all these windows suddenly became very frustrating.

Why can’t I focus exactly the window I want with one hotkey just like I focus apps with rcmd?

I’m probably going to have the same set of windows for the next few days, I know the names of the projects I have open in them, I could use the first letter of the project name to reference a specific window.

The Right Command key is taken, but right beside it stands another rarely used key: the Right Option key (ralt for short).

I want to be able to press ralt-r to focus the Sublime window containing the rcmd project, ralt-l to focus the Lunar project, ralt-v for the Volum project, ralt-p to get to the PrivateFrameworks folder and so on.

The plan seems simple enough:

Oh right … the sandbox

It’s not like the above hasn’t been done before, there are plenty of window switcher and snap/resize examples on macOS, some of them are even open source:

One window snapping tool is even on the App Store: Magnet.

But why are there no window switchers on the App Store?

Well, for app switching, Apple provides a really nice API to enumerate and activate running apps without needing any intrusive permissions: NSRunningApplication

Finding Xcode and focusing it

let apps = NSWorkspace.shared.runningApplications
let xcode = apps.first { app in
    app.bundleIdentifier == "com.apple.dt.Xcode"
}
xcode?.activate()

But there’s no such thing for enumerating the windows of those running apps. All of the apps that work with app windows need to tap into the Accessibility API, the one that gives you full access to extract and modify the contents of everything visible and invisible.

system dialog with yabai requesting Accessibility permissions

And so window enumeration becomes possible, by fetching the array of UI elements under the AXWindows attribute of an app.

But since a window is just like any other UI element, there’s no focus or activate method. So how do these apps manage to focus a window?

Take a look at this nice and intuitive snippet extracted from yabai:

static void window_manager_make_key_window(ProcessSerialNumber *window_psn, uint32_t window_id)
{
    uint8_t bytes1[0xf8] = { [0x04] = 0xf8, [0x08] = 0x01, [0x3a] = 0x10 };
    uint8_t bytes2[0xf8] = { [0x04] = 0xf8, [0x08] = 0x02, [0x3a] = 0x10 };

    memcpy(bytes1 + 0x3c, &window_id, sizeof(uint32_t));
    memset(bytes1 + 0x20, 0xFF, 0x10);

    memcpy(bytes2 + 0x3c, &window_id, sizeof(uint32_t));
    memset(bytes2 + 0x20, 0xFF, 0x10);

    SLPSPostEventRecordTo(window_psn, bytes1);
    SLPSPostEventRecordTo(window_psn, bytes2);
}

Even though I knew that key window means focused window in macOS terminology, it still took me a while to land on this code and start believing that this really focuses a window.

In the end, what that code does is pass messages to the SkyLight private framework, the one that handles macOS window management, the Dock, Spaces and a ton of other stuff. I’m guessing someone attached a debugger inside a VM or looked through the assembly code to find the right bytes to send.
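Translated to Swift for readability (same magic offsets as the C snippet; SLPSPostEventRecordTo itself remains a private SkyLight call, so this is only the byte-packing half):

```swift
// Build the same event record as the C snippet: magic header bytes,
// the window id at offset 0x3c (little-endian), 0xFF fill at 0x20..0x2f.
func makeKeyWindowBytes(windowID: UInt32, eventKind: UInt8) -> [UInt8] {
    var bytes = [UInt8](repeating: 0, count: 0xf8)
    bytes[0x04] = 0xf8
    bytes[0x08] = eventKind   // 0x01 and 0x02 in the original snippet
    bytes[0x3a] = 0x10
    withUnsafeBytes(of: windowID.littleEndian) { raw in
        for (i, b) in raw.enumerated() { bytes[0x3c + i] = b }
    }
    for i in 0x20..<0x30 { bytes[i] = 0xff }
    return bytes
}

let record = makeKeyWindowBytes(windowID: 1234, eventKind: 0x01)
// record[0x3c] == 0xd2, record[0x3d] == 0x04 (1234 == 0x04d2)
```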

Ok, enumeration and focusing is doable, what else do we need? Right, Accessibility permissions. Here comes the biggest hurdle.

How do you escape the macOS sandbox?

You don’t.

On macOS, an app can be run:

App Store apps can only run inside the sandbox, and within that, an app can’t ask for Accessibility permissions. The API for that just throws a silent error and does nothing.

But then how does Magnet do it, and a few other apps as well like Peek or PopClip for example?

Turns out, these apps have a special exception from Apple, mostly because they were on the App Store before the sandbox became mandatory: objective c - How to use Accessibility with sandboxed app? - Stack Overflow

I can barely get my apps past the App Store reviewers as it is; I’m not going to get an exception just so that rcmd can focus specific windows. So now what?

Workarounds

I thought that if there was an app running outside the sandbox, listening for rcmd’s listWindows and focusWindow commands, I might be able to get this working.

I remembered Hammerspoon having really complete window management support, and it also being scriptable with Lua made it the perfect choice.

HTTP would probably be overkill for this, I knew Hammerspoon had an inter-process communication (IPC) API built-in so I tried to use that.

static NSString *portName = @"Hammerspoon";
CFMessagePortRef messagePort = CFMessagePortCreateRemote(NULL, (__bridge CFStringRef)portName);
// messagePort is NULL here

Well nope, the sandbox doesn’t allow that.

What about the hs CLI that Hammerspoon provides? I knew you could send arbitrary IPC messages using that, right?

Nope again: any process run by a sandboxed app inherits that sandbox’s limitations.

Ok fine, HTTP it is! Thankfully Hammerspoon provides an HTTP server, and I just need to register a callback and make it listen on a port. Since we’ve already reached this madness, let’s go straight to websockets.

function rcmdCallbackWS(msg)
    local params = hs.json.decode(msg)
    local response = "{}"

    if params.cmd == "listWindows" then
        response = hs.json.encode(hs.window.allWindows())
    elseif params.cmd == "focusWindow" then
        hs.window.get(params.window):focus()
    end

    return response
end

server = hs.httpserver.new(false, false)
server:setName("rcmd-hammerspoon")
server:setInterface("localhost")
server:setPort(3094)

server:websocket("/ws", rcmdCallbackWS)
server:start()

Alright, this seems to work. I can connect to the Hammerspoon websocket, get all windows, and focus windows by their IDs.
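On the client side, the commands that the Lua callback expects can be modeled with a small Codable type. A hedged sketch with hypothetical names (the real rcmd code may differ):

```swift
import Foundation

// Hypothetical model of the commands the Lua callback handles.
struct SwitcherCommand: Codable {
    let cmd: String     // "listWindows" or "focusWindow"
    let window: Int?    // window id, only needed for focusWindow
}

let payload = try! JSONEncoder().encode(SwitcherCommand(cmd: "focusWindow", window: 42))
// `payload` would then be sent over the websocket,
// e.g. with URLSessionWebSocketTask on ws://localhost:3094/ws
```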

Now how do I explain to rcmd users everything they have to do in order to focus windows?

Automating the workarounds

The App Store guidelines explicitly forbid an app from installing another app or binary to enhance its capabilities.

2.4.5 Apps distributed via the Mac App Store have some additional requirements to keep in mind:

(iv) They may not download or install standalone apps, kexts, additional code, or resources to add functionality or significantly change the app from what we see during the review process.

So I can’t install Hammerspoon automatically (it would be a bad idea anyway; this is malware behavior), but I can try to automate most of the steps and present it as a 1-button install action.

So I wrote a function to download Hammerspoon.zip, unzip it into a temporary folder, move it to /Applications, write init.lua and rcmd.lua inside the ~/.hammerspoon directory, launch Hammerspoon, and wait for the websocket to become available.

The user only has to click an Install window switcher button, no big deal.

Quarantine says “not so fast”

You see, when a sandboxed app downloads a file, the system automatically adds the com.apple.quarantine extended attribute to the file.

> xattr -l ~/Downloads/Hammerspoon.zip
com.apple.quarantine: 0083;62ea4f5c;Safari;3A6D521B-5E0D-4202-80C4-A5EB567DC246
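That value is four semicolon-separated fields: the quarantine flags, the download timestamp as hex Unix seconds, the agent that downloaded the file, and an event UUID. A parsing sketch (type and function names are hypothetical):

```swift
import Foundation

// Parse a com.apple.quarantine value: "flags;hex-timestamp;agent;uuid".
struct QuarantineInfo {
    let flags: UInt32
    let date: Date      // timestamp field is hex seconds since the Unix epoch
    let agent: String
    let uuid: String
}

func parseQuarantine(_ value: String) -> QuarantineInfo? {
    let parts = value.split(separator: ";", omittingEmptySubsequences: false).map(String.init)
    guard parts.count >= 4,
          let flags = UInt32(parts[0], radix: 16),
          let seconds = UInt64(parts[1], radix: 16)
    else { return nil }
    return QuarantineInfo(flags: flags,
                          date: Date(timeIntervalSince1970: TimeInterval(seconds)),
                          agent: parts[2],
                          uuid: parts[3])
}

let info = parseQuarantine("0083;62ea4f5c;Safari;3A6D521B-5E0D-4202-80C4-A5EB567DC246")!
// info.agent == "Safari"; 0x62ea4f5c decodes to a date in August 2022
```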

This means that macOS Gatekeeper will prevent you from launching any downloaded app or running any binary directly from code.

Even if the user tries to launch the downloaded app manually afterwards, it will still fail with the “App can’t be opened” error.

system dialog with hammerspoon not being allowed to launch because of the quarantine attribute

No amount of xattr -cr Hammerspoon.app will fix this if run from the sandbox.

Great. Scrap the download and install part, split the button into two buttons:

  1. Install Hammerspoon which only shows text instructions on how to download and install the app manually
  2. Install custom script which writes the Lua script files to disk
rcmd menu showing the two install buttons

I’ve streamlined this process as much as the sandbox allows me, and after giving the app to some beta testers, every single one of them found it so confusing that they said they would not use it.

And who can blame them, I myself find it too convoluted whenever I test it.

So is this on the App Store?

Yes, surprisingly. It passed App Review without a single rejection.

I hid the feature behind a red Try experimental window switching button to deter support emails on the subject, but it’s there for anyone to try and use.

rcmd menu showing the try experimental window switching button

After the initial setup, it actually works pretty reliably, and the websocket connection to Hammerspoon is so fast that I don’t ever notice this happens over the network. It feels like a native window switcher to me.

But I wasn’t able to create a seamless experience like I did for app switching.

Oh well, at least I solved my own problem and can get back to what I was doing.

One month later.

Trying to get past the 500 nits limit of the MacBook Pro (and failing)

2022-02-05 01:26:36

Update: I finally found a way to go over the limit in Lunar v5.5.1


Exactly 3 months and a day after placing an order through a Romanian Apple reseller, I finally got my 14-inch M1 Max.

Well, actually... I first got the wrong configuration (base model instead of CTO) and had to return it after wasting a day migrating my data to it; they sent my money back by mistake, I had to pay them again, and many calls and emails later the correct laptop arrived.

M1 Max MacBook Pro box

As soon as these devices were in the hands of users, requests started coming in for Lunar to provide an option to get past the 500 nits limit for everyday usage.

Over the last week I tried my best to figure out how to do this, but it’s either impossible to raise the nits limit from userspace, or I just don’t have the necessary expertise.

I’ll share some details that I found while reverse engineering my way through the macOS part that handles brightness.

Testing the system

Playing a HDR video

I first started by playing this HDR test video (open it in latest Chrome or Safari for best results): hdr-test-pattern.webm

Which resulted in a blinding white at 1600 nits:

HDR white being whiter than the webpage white

This generated the following logs in Console.app:

WindowServer    Display 1 setting nits to 888.889
corebrightnessd SDR - perceptual ramp clocked: 227.095169 -> 252.268112 - 49.169426% (239.142059 Nits)
WindowServer    Display 1 commitBrightness sdr: 211.603, headroom: 4.20075, ambient: 4.3396, filtered ambient: 13.6333, limit: 1600

SDR cap in normal lighting

After setting the display brightness to max, I could see in the logs that SDR (Standard Dynamic Range) was being capped at 400 nits:

WindowServer    Display 1 setting nits to 1600
WindowServer    Display 1 setting display headroom hint to 7.56866
WindowServer    Display 1 commitBrightness sdr: 216.548, headroom: 7.38865, ambient: 4.24854, filtered ambient: 13.3472, limit: 1600
corebrightnessd PCC: Set PCC: Factor:=1.0496 CabalFactor:=0.0033 time=2.000000 Lux:=13.6080 Nits:=229.1757 result=1 error=(null)
WindowServer    Display 1 commitBrightness sdr: 301.188, headroom: 5.3123, ambient: 4.24854, filtered ambient: 13.3472, limit: 1600
WindowServer    Display 1 setting nits to 1602.03
corebrightnessd levelPercentage 0.334298, level = 4.967383 (nits/pwm), lux = 15.000000
WindowServer    Display 1 commitBrightness sdr: 301.571, headroom: -1, ambient: 4.79275, filtered ambient: 15.0569, limit: -1
WindowServer    Display 1 setting display headroom hint to 5.27556
WindowServer    Display 1 commitBrightness sdr: 321.478, headroom: 4.97701, ambient: 4.79275, filtered ambient: 15.0569, limit: 1600
WindowServer    Display 1 commitBrightness sdr: 340.675, headroom: 4.69655, ambient: 4.79275, filtered ambient: 15.0569, limit: 1600
WindowServer    Display 1 commitBrightness sdr: 377.322, headroom: 4.24041, ambient: 4.79275, filtered ambient: 15.0569, limit: 1600
corebrightnessd PCC: Set PCC: Factor:=1.0340 CabalFactor:=0.0023 time=2.000000 Lux:=15.0569 Nits:=377.3223 result=1 error=(null)
WindowServer    Display 1 setting nits to 1600
WindowServer    Display 1 setting display headroom hint to 4
WindowServer    Display 1 commitBrightness sdr: 400, headroom: -1, ambient: 4.96577, filtered ambient: 15.6004, limit: -1
HDR white and console logs side by side

SDR cap in direct sunlight

Shining a flashlight directly into the Ambient Light Sensor allowed SDR to jump up to 500 nits:

WindowServer    Display 1 commitBrightness sdr: 400, headroom: -1, ambient: 322.204, filtered ambient: 1012.24, limit: -1
WindowServer    Display 1 commitBrightness sdr: 400.484, headroom: 1, ambient: 322.204, filtered ambient: 1012.24, limit: 400.484
WindowServer    Display 1 setting nits to 400.484
WindowServer    Display 1 commitBrightness sdr: 401.15, headroom: 1, ambient: 322.204, filtered ambient: 1012.24, limit: 401.15
WindowServer    Display 1 setting nits to 401.15
WindowServer    Display 1 commitBrightness sdr: 401.223, headroom: 1, ambient: 322.204, filtered ambient: 1012.24, limit: 401.224
WindowServer    Display 1 setting nits to 401.223
WindowServer    Display 1 commitBrightness sdr: 401.223, headroom: -1, ambient: 370.814, filtered ambient: 1164.95, limit: -1
WindowServer    Display 1 commitBrightness sdr: 401.552, headroom: 1, ambient: 370.814, filtered ambient: 1164.95, limit: 401.552
corebrightnessd PCC: Set PCC: Factor:=1.7464 CabalFactor:=0.0498 time=2.000000 Lux:=1164.9467 Nits:=401.5517 result=1 error=(null)
WindowServer    Display 1 setting nits to 401.552
WindowServer    Display 1 commitBrightness sdr: 402.219, headroom: 1, ambient: 370.814, filtered ambient: 1164.95, limit: 402.219
WindowServer    Display 1 setting nits to 402.219
WindowServer    Display 1 commitBrightness sdr: 402.885, headroom: 1, ambient: 370.814, filtered ambient: 1164.95, limit: 402.885
WindowServer    Display 1 setting nits to 402.885
... lots of similar logs ...
WindowServer    Display 1 setting nits to 495.458
WindowServer    Display 1 commitBrightness sdr: 496.125, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 496.125
WindowServer    Display 1 setting nits to 496.125
WindowServer    Display 1 commitBrightness sdr: 496.791, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 496.792
WindowServer    Display 1 setting nits to 496.791
WindowServer    Display 1 commitBrightness sdr: 497.458, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 497.458
WindowServer    Display 1 setting nits to 497.458
WindowServer    Display 1 commitBrightness sdr: 498.125, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 498.125
WindowServer    Display 1 setting nits to 498.125
WindowServer    Display 1 commitBrightness sdr: 498.791, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 498.792
WindowServer    Display 1 setting nits to 498.791
WindowServer    Display 1 commitBrightness sdr: 499.458, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 499.458
WindowServer    Display 1 setting nits to 499.458
WindowServer    Display 1 commitBrightness sdr: 500, headroom: 1, ambient: 810.176, filtered ambient: 2545.24, limit: 500
WindowServer    Display 1 setting nits to 500
WindowServer    Display 1 commitBrightness sdr: 500, headroom: -1, ambient: 987.858, filtered ambient: 3103.45, limit: -1
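One pattern jumps out of these logs: whenever headroom is positive, it is simply limit divided by sdr, i.e. how many times brighter than the current SDR level the panel is allowed to go. A quick check against a log line copied from the Console output above (the parsing regex is my own):

```python
import re

LINE = re.compile(r"sdr: ([\d.]+), headroom: ([-\d.]+), .* limit: ([-\d.]+)")

def headroom_from(line):
    """Extract (sdr, headroom, limit) from a commitBrightness log line."""
    sdr, headroom, limit = map(float, LINE.search(line).groups())
    return sdr, headroom, limit

sdr, headroom, limit = headroom_from(
    "WindowServer    Display 1 commitBrightness sdr: 216.548, "
    "headroom: 7.38865, ambient: 4.24854, filtered ambient: 13.3472, limit: 1600"
)
# limit / sdr is approximately 7.388, matching the reported headroom
```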

Dissecting the system

Since Big Sur, macOS transitioned from shipping frameworks as separate binaries on disk to shipping a single file containing all the system libraries, called the dyld_shared_cache.

  • New in macOS Big Sur 11.0.1, the system ships with a built-in dynamic linker cache of all system-provided libraries. As part of this change, copies of dynamic libraries are no longer present on the filesystem. Code that attempts to check for dynamic library presence by looking for a file at a path or enumerating a directory will fail. Instead, check for library presence by attempting to dlopen() the path, which will correctly check for the library in the cache. (62986286)
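Following that advice, checking for a library is just an attempted dlopen(). A small portable sketch via Python's ctypes (on macOS you would pass the framework binary path, e.g. something under /System/Library/Frameworks; the path used in the usage comment is hypothetical):

```python
import ctypes

def library_present(path):
    """Return True if dlopen() can load the library at path.

    Passing None dlopen()s the running program itself, which always
    succeeds, so it doubles as a sanity check.
    """
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

# library_present("/System/Library/Frameworks/QuartzCore.framework/QuartzCore")
```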

Searching for keywords from the above logs surfaced only the dyld cache, as expected.

searching for nits in system

I used dyld-shared-cache-extractor to drop the separate binaries on disk, then did another search there.

This surfaced QuartzCore as the single place where that string could be found.

searching for nits in extracted dyld cache

Trying to abuse QuartzCore

After looking through the QuartzCore binary with Ghidra and finding some iOS headers for it on limneos.net, I created a sample Swift project to try to use some of the exported functions from it: monitorpanel - main.swift

Based on some open-sourced iOS jailbreak tweaks, I noticed that developers used the CAWindowServer class to interface with the display and HID components directly. The class was available here so I tried to do the same on macOS.

Unfortunately, CAWindowServer.serverIfRunning always returns nil and while CAWindowServer.server(withOptions: nil) returns a seemingly valid server, all external displays are forcefully disconnected when that server is created.

Using the below code, I succeeded in producing the commitBrightness log line in Console, but nothing really changed.

code from main.swift

func setToMax(_ d: CAWindowServerDisplay) {
    d.setBrightnessLimit(1600)
    d.setHeadroom(1)
    d.maximumBrightness = 1000.0
    d.setSDRBrightness(600)
    d.maximumHDRLuminance = 1600
    d.maximumReferenceLuminance = 1600
    d.maximumSDRLuminance = 1000
    d.contrast = 1.1
    d.commitBrightness(1)
    // d.update() // segfault
}

let ws: CAWindowServer? = (CAWindowServer.server(withOptions: nil) as? CAWindowServer) // disconnects external displays
if let ws = ws,
   let displays = ws.displays as? [CAWindowServerDisplay],
   let d = displays.first(where: { $0.deviceName == "primary" })
{
    setToMax(d)
}

commitBrightness log line

monitorpanel    Display 1 commitBrightness sdr: 600, headroom: 1, ambient: -1, filtered ambient: -1, limit: 1600

CoreBrightness

While looking through Ghidra, I noticed that QuartzCore eventually calls into CoreBrightness functions to increase the nits limit, so I took a look at the exported symbols on that binary.

Unfortunately, none of the potentially useful symbols are exported, and trying to link against them results in an undefined symbols error.

Adding the private symbols in the CoreBrightness.tbd file doesn’t help in this case.

// Uninteresting Exported Symbols

_OBJC_CLASS_$_BrightnessSystem
_OBJC_CLASS_$_BrightnessSystemClient
_OBJC_CLASS_$_BrightnessSystemClientInternal
_OBJC_CLASS_$_CBAdaptationClient
_OBJC_CLASS_$_CBBlueLightClient
_OBJC_CLASS_$_CBClient
_OBJC_CLASS_$_CBKeyboardPreferencesManager
_OBJC_CLASS_$_CBTrueToneClient
_OBJC_CLASS_$_DisplayServicesClient
_OBJC_CLASS_$_KeyboardBrightnessClient


// Interesting Not Exported Symbols

-[CBBrightnessProxySKL brightnessNotificationRequestEDR]
-[CBBrightnessProxySKL brightnessRequestEDRHeadroom]
-[CBBrightnessProxySKL brightnessRequestRampDuration]
-[CBBrightnessProxySKL commitBrightness:]
-[CBBrightnessProxySKL initWithSLSBrightnessControl:]
-[CBBrightnessProxySKL setAmbient:]
-[CBBrightnessProxySKL setBrightnessLimit:]
-[CBBrightnessProxySKL setHeadroom:]
-[CBBrightnessProxySKL setNotificationQueue:]
-[CBBrightnessProxySKL setPotentialHeadroom:]
-[CBBrightnessProxySKL setSDRBrightness:]
-[CBBrightnessProxySKL setWhitePoint:rampDuration:error:]
-[CBBrightnessProxySKL unregisterNotificationBlocks]
-[CBDisplayModuleSKL configureEDRSecPerStop]
-[CBDisplayModuleSKL configurePCCDefaults]
-[CBDisplayModuleSKL getBrightnessLimit]
-[CBDisplayModuleSKL getDynamicSliderAdjustedNits:]
-[CBDisplayModuleSKL getDynamicSliderAdjustedSDRNits]
-[CBDisplayModuleSKL getLinearBrightnessForNits:]
-[CBDisplayModuleSKL getLinearBrightness]
-[CBDisplayModuleSKL getMaxNitsAdjusted]
-[CBDisplayModuleSKL getMaxNitsEDR]
-[CBDisplayModuleSKL getMaxPanelNits]
-[CBDisplayModuleSKL getNitsForLinearBrightness:]
-[CBDisplayModuleSKL getNitsForUserBrightness:]
-[CBDisplayModuleSKL getPerceptualBrightness]
-[CBDisplayModuleSKL getSDRBrightnessCurrent]
-[CBDisplayModuleSKL getSDRBrightnessTarget:]
-[CBDisplayModuleSKL getSDRNitsCapped]
-[CBDisplayModuleSKL getUserBrightnessForNits:]
-[CBDisplayModuleSKL getUserBrightnessSloperExtended]
-[CBDisplayModuleSKL getUserBrightness]
-[CBDisplayModuleSKL handleBrightnessCapOverride:]
-[CBDisplayModuleSKL initialiseEDR]
-[CBDisplayModuleSKL initialiseSDR]
-[CBDisplayModuleSKL luminanceToPerceptual:]
-[CBDisplayModuleSKL panelMaxNitsOverride:]
-[CBDisplayModuleSKL perceptualToLuminance:]
-[CBDisplayModuleSKL rampDynamicSlider:withLength:]
-[CBDisplayModuleSKL rampEDRHedroom:withLength:]
-[CBDisplayModuleSKL rampFactor:withLength:]
-[CBDisplayModuleSKL rampManagerUpdateHandling]
-[CBDisplayModuleSKL rampNitsCap:]
-[CBDisplayModuleSKL rampSDRBrightness:withLength:properties:]
-[CBDisplayModuleSKL requestEDRHeadroomImmediate:]
-[CBDisplayModuleSKL requestEDRHeadroomTransition:withLength:]
-[CBDisplayModuleSKL requestEDRHeadroomTransitionStop]
-[CBDisplayModuleSKL requestFactorImmediate:]
-[CBDisplayModuleSKL requestFactorTransition:withLength:]
-[CBDisplayModuleSKL requestFactorTransitionStop]
-[CBDisplayModuleSKL requestSDRBrightnessTransition:]
-[CBDisplayModuleSKL requestSDRBrightnessTransition:withLength:properties:]
-[CBDisplayModuleSKL requestSDRBrightnessTransitionStop]
-[CBDisplayModuleSKL supportsDynamicSlider]
-[CBDisplayModuleSKL supportsEDR]
-[CBDisplayModuleSKL supportsSDRBrightness]
-[CBDisplayModuleSKL updateAmbient]
-[CBDisplayModuleSKL updateAutoBrightnessState:]
-[CBDisplayModuleSKL updateBrightnessState]
-[CBDisplayModuleSKL updateContrastEnhancerState:]
-[CBDisplayModuleSKL updateDynamicSliderAmbient]
-[CBDisplayModuleSKL updateDynamicSliderAutoBrightness]
-[CBDisplayModuleSKL updateDynamicSliderChargerState]
-[CBDisplayModuleSKL updateDynamicSliderScaler:]
-[CBDisplayModuleSKL updateEDRAmbient]
-[CBDisplayModuleSKL updateSDRBrightness:]
-[CBDisplayModuleSKL updateSDRNits:]
-[CBEDR appliedCompensation]
-[CBEDR availableHeadroom]
-[CBEDR brightnessCap]
-[CBEDR cappedHeadroomFromUncapped:]
-[CBEDR copyStatusInfo]
-[CBEDR description]
-[CBEDR initWithRampPolicy:potentialHeadroom:andReferenceHeadroom:]
-[CBEDR maxHeadroom]
-[CBEDR panelMax]
-[CBEDR referenceHeadroom]
-[CBEDR sanityCheck]
-[CBEDR sdrBrightness]
-[CBEDR secondsPerStop]
-[CBEDR setAppliedCompensation:]
-[CBEDR setBrightnessCap:]
-[CBEDR setPanelMax:]
-[CBEDR setSdrBrightness:]
-[CBEDR setSecondsPerStop:]
-[CBEDR shouldUpdateEDRForRequestedHeadroom:targetHeadroom:rampTime:]
-[CBEDR stopsFromHeadroomRatio:]
-[CBNVRAM backlightNitsDefault]
-[CBNVRAM backlightNitsMax]
-[CBNVRAM backlightNitsMin]
-[CBNVRAM dealloc]
-[CBNVRAM init]
-[CBNVRAM readBacklightNits]
-[CBNVRAM setBacklightNitsMax:]
-[CBNVRAM writeBacklightNits:]

SkyLight

I knew from previous work on window management that the SkyLight framework is closely related to the WindowServer so I took a look at that too.

SkyLight exports a lot of symbols, and fortunately I had a good example on how to use them inside yabai, a macOS window manager similar to i3 and bspwm.

But again, nothing useful is exported.

Searching for nits in SkyLight

The function kSLSBrightnessRequestEDRHeadroom seemed promising, but I always got a SIGBUS when trying to call it. I couldn’t find its implementation, so I didn’t know what parameters to pass; I just guessed the first one could be a display ID.

As one Hacker News user pointed out, kSLSBrightnessRequestEDRHeadroom is actually a constant. And of course it is! It has the usual k prefix.. how did I miss that?

@import Darwin;
@import Foundation;

// clang -fmodules -F/System/Library/PrivateFrameworks -framework SkyLight -o headroom headroom.m && ./headroom

extern int SLSMainConnectionID(void);
extern CFTypeRef SLSDisplayGetCurrentHeadroom(int did);

const int MAIN_DISPLAY_ID = 1;

int main(int argc, char** argv)
{
    int cid = SLSMainConnectionID();
    NSLog(@"SLSMainConnectionID: %d", cid);

    CFTypeRef headroom = SLSDisplayGetCurrentHeadroom(MAIN_DISPLAY_ID);
    NSLog(@"SLSDisplayGetCurrentHeadroom: %@", headroom);

    return 0;
}

Other ideas

Streaming to a dummy

While discussing this matter with István Tóth, the developer of BetterDummy, he came up with an interesting idea.

  1. Create a CGVirtualDisplay with the same size as the built-in display
  2. Tone map the SDR contents of the built-in display to 1000-nit HDR video
  3. CGDisplayStream that video to the virtual display
  4. Move the virtual display to the built-in display coordinates and use that as the main display

The streaming part already works in the latest Beta of BetterDummy and seems pretty fast as well. But adding tone mapping might cause this to be too resource intensive to be used.

Using private symbols

I think linking can be done against private symbols using memory offsets; I remember doing something like that 8 years ago at Bitdefender, while trying to use the unexported _decrypt and _generate_domain methods of some DGA malware.

But the dyld_shared_cache model of macOS is something new to me and I don’t have enough knowledge to be able to do that right now.

If someone has any idea how this can be achieved, I’d be glad if you could send me a hint through the Contact page.

Why aren't the most useful Mac apps on the App Store?

2021-12-04 01:28:39

Let’s set the stage first. So, it’s Tuesday night and I’m Command-Tabbing my way through 10 different apps, some with 3-4 windows, while trying to patch bugs in Lunar faster than the users can submit reports. I’m definitely failing.

I feel my brain pulsing and my ring finger going numb on the Tab key. I stop switching apps and just stare at the Xcode window, containing what I knew was Swift code but looked like gibberish right now.

“Feels like burnout” I think. Wasn’t that what I ran away from when I quit my job to make apps for a living?


I heard a joke recently:

Didn't want a 9 to 5 job, now I work 24/7

It’s probably only funny for a small group of workaholics, but the reality of those words struck me in the middle of the hysterical laughter I was trying to stop.

Why am I still developing this app?

Why am I adding all the features the users are asking for, then dealing with the flood of frustrated emails saying “what an overcomplicated stupid app, I just want to make my screen brighter!!”, then trying to hide advanced features to make it simpler, then getting assaulted with the confused “I can’t change volume anymore fix this ASAP!!!” (because UI changes can very easily introduce bugs, by simply forgetting to bind a slider to a value), then getting back to scotch-taping broken parts slower than the users can report them?

Those features should have probably been their own independent app.

I start to feel my fingers again, press Command Tab once more, and while looking at the list of app icons I realise something.

Maybe pressing Tab 4-5 times while visually assessing if the selected app icon is the one I want to focus isn’t the best solution for this kind of workflow.

So what does my brain do when I feel burnt out? Gives me ideas for even more apps…

rcmd

That’s how the idea of rcmd began. We have two Command keys on a Mac keyboard, and the right-hand one is almost never used. What if I used it exclusively for switching apps?

rcmd app screenshot

When I used Windows for reverse engineering malware, I liked switching apps using Win + Number where the number meant the position of the app icon in the taskbar. I didn’t like counting apps however.

Using the app name felt the most natural. I remembered using Contexts for a while, which provides a Spotlight-like search bar for fuzzy-searching your running apps. But that needed more key presses than I wanted (that is, one) and more attention than I was willing to give (that is, none).

My idea sounded a bit simpler: Right Command + the first letter of the app name
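The core of that idea fits in a few lines. A sketch (hypothetical, not rcmd’s actual code) assigning running apps to keys by first letter, with colliding apps falling back to the next unused letter of their name:

```python
def assign_keys(app_names):
    """Map each app to the first letter of its name; on collision,
    fall back to the next unused letter of the name (a hypothetical
    rule, not necessarily what rcmd does)."""
    keys = {}
    for name in app_names:
        for ch in name.lower():
            if ch.isalpha() and ch not in keys:
                keys[ch] = name
                break
    return keys

keys = assign_keys(["Safari", "Sublime Text", "Xcode"])
# {'s': 'Safari', 'u': 'Sublime Text', 'x': 'Xcode'}
```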

So simple that people were offended by it…

hacker news comment screenshot where someone is offended by the price

I pitched this idea to Ovidiu Rusu, a very good friend of mine, who surprisingly seemed to have the same need as me. We created the first prototype in about a week (icons and graphics take so much time…) and started using it in our day to day work to see if it made sense.

In less than a day, rcmd became so ingrained in our app switching that we got incredibly annoyed when we had to quit the app for recompiling and debugging. We just kept pressing Right Command X and staring at the screen like complete idiots, not understanding why Xcode wasn’t being focused.


What most people overlook when they have a simple idea is that 80% of the effort goes into handling edge cases that are not visible in the original idea.

Just for this simple app we had to solve the following problems:

This last question is what led me to write this article. It turned out we needed to do quite a few hacks if we wanted to publish this app in the App Store.

The Sandbox

Every app that is submitted to the App Store must be compiled to run within a sandbox. This means that the app will run in a container which will have the same structure as your home directory, but with mostly empty folders.
The sandbox also limits what APIs you can use, and which system components you can communicate with.
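Concretely, a sandboxed app’s “home” is a per-app container: NSHomeDirectory() resolves to something like ~/Library/Containers/&lt;bundle id&gt;/Data instead of your real home. A sketch computing that path (the bundle ID and helper are hypothetical):

```python
from pathlib import Path

def container_home(bundle_id, user_home="~"):
    """Path a sandboxed app sees as its home directory."""
    return Path(user_home).expanduser() / "Library" / "Containers" / bundle_id / "Data"

home = container_home("com.lowtechguys.rcmd", user_home="/Users/alin")
# /Users/alin/Library/Containers/com.lowtechguys.rcmd/Data
```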

The de facto way of reacting to Right Command + some other key is to monitor all key events (yes, just like a keylogger) and discard events that don’t contain the Right Command modifier flag.

public extension NSEvent.ModifierFlags {
    static let rightCommand = NSEvent.ModifierFlags(rawValue: UInt(NX_DEVICERCMDKEYMASK))
}

NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { event in
    guard event.modifierFlags.contains(.rightCommand) else { return }

    // do your thing
}

Easy peasy, right? Well no, because that’s not allowed on the App Store.

To use that API you need to first request Accessibility Permissions from the user. Those permissions are prohibited inside the Sandbox because, with them, an app would be able to do all kinds of nasty stuff: press keys and click on your behalf, or read the text of any window on screen.

Those are perfectly reasonable things in the context of assistive software, because you need the computer to do stuff for you when you aren’t able to use a keyboard or a mouse/trackpad.

And you need the computer to read out text from other apps, or show choice buttons which you can trigger with your voice.


Technical content ahead. Skip this section if you’re not interested in macOS internals.

But for rcmd’s use case, we’re restricted to APIs that don’t require these permissions. APIs so old that 64-bit wasn’t even a thing when they launched, and that require passing C function pointers instead of our beloved, powerful Swift closures.

That’s the Carbon API and it goes a little something like this:

Registering a Command+R hotkey

// Install the key event handler
var pressedEventType = EventTypeSpec()
pressedEventType.eventClass = OSType(kEventClassKeyboard)
pressedEventType.eventKind = OSType(kEventHotKeyPressed)

InstallEventHandler(GetEventDispatcherTarget(), { _, inEvent, _ -> OSStatus in
    return handlePressedKeyboardEvent(inEvent!)
}, 1, &pressedEventType, nil, nil)


// Register the hotkey
let hotKeyId = EventHotKeyID(signature: UTGetOSTypeFromString("some-unique-identifier" as CFString), id: 0)
var carbonHotKey: EventHotKeyRef?

RegisterEventHotKey(UInt32(kVK_ANSI_R),
                    UInt32(cmdKey),
                    hotKeyId,
                    GetEventDispatcherTarget(),
                    0,
                    &carbonHotKey)

// Handle the event
func handlePressedKeyboardEvent(_ event: EventRef) -> OSStatus {
    assert(Int(GetEventClass(event)) == kEventClassKeyboard, "Unknown event class")

    var hotKeyId = EventHotKeyID()
    let error = GetEventParameter(event,
                                  EventParamName(kEventParamDirectObject),
                                  EventParamName(typeEventHotKeyID),
                                  nil,
                                  MemoryLayout<EventHotKeyID>.size,
                                  nil,
                                  &hotKeyId)

    guard error == noErr else { return error }
    assert(hotKeyId.signature == UTGetOSTypeFromString("some-unique-identifier" as CFString), "Invalid hot key id")

    switch GetEventKind(event) {
    case EventParamName(kEventHotKeyPressed):
        break // do your thing.. eventually
    default:
        assert(false, "Unknown event kind")
    }
    return noErr
}

Not as pretty as the NSEvent method, but it does the job. Kind of.

You see, that beautiful code macaroni above only lets us listen to Any Command + R, not specifically the Right Command. There’s no way to pass something like rightCmdKey into RegisterEventHotKey.

A workaround I found for this was:

var rcmd = false

NSEvent.addGlobalMonitorForEvents(matching: .flagsChanged) { event in
    rcmd = event.modifierFlags.contains(.rightCommand)
}

func handleHotkey(key: String) {
    guard rcmd else { return }

    focusApp(with: key)
}

Doing this reminded me of the days I worked with Rust, and how wonderfully impossible a task like this would be there. I don’t think I’m touching it again; I like my global atomic booleans.

Now the weirdest limitation hits me. There’s no way to discard a hotkey event and forward it back to the system so it can use it for the next handler.

Say I register Command C and I only want to do something when Right Command is held. If I do nothing when Left Command is held, then you can’t copy text anymore using Command C.

I tried returning the inappropriately named OSStatus(eventNotHandledErr), but the event still didn’t return to the handler chain.

At this point we seriously considered dropping the App Store idea and just going the self publishing route.

But just thinking what we would have to do for that triggered something akin to PTSD.

Here’s a list with what I can remember off the top of my head from Lunar:

Finding yet another workaround seemed much easier.

Thankfully it really was easy. It turns out that RegisterEventHotKey is plenty fast. So fast that we were able to register the hotkeys only when Right Command was being held, and unregister them when the key was released.

import Atomics

var _rcmd = ManagedAtomic<Bool>(false)
var rcmd: Bool {
    get { _rcmd.load(ordering: .relaxed) }
    set { _rcmd.store(newValue, ordering: .sequentiallyConsistent) }
}

NSEvent.addGlobalMonitorForEvents(matching: .flagsChanged) { event in
    rcmd = event.modifierFlags.contains(.rightCommand)

    if rcmd {
        registerHotkeys()
    } else {
        unregisterHotkeys()
    }
}

The App Store review

Now rcmd was ready for publishing on the App Store.

There was one little thing that bothered me though. I usually keep 4-5 separate projects open in Sublime Text, each with its own window. Because of the sandbox, there’s no way to get a list of windows for an app and, say, focus a specific one, or cycle between them.

But I found a little gem while I was customising my fork of yabai, a way to trigger Exposé for a single app:

CoreDockSendNotification("com.apple.expose.front.awake" as CFString)
app window expose screenshot

We decided to show Exposé if, for example, you press rcmd s while Sublime Text is already focused. It was good enough for us.

Not for the App Store reviewer though.

app store review rejecting the expose feature

I knew private and undocumented APIs are not seen well on the App Store. But I had no idea they would guarantee a rejection.

Free Trials

I like breaking the norm with my creations. Some of them will be flukes, some will be criticised into oblivion, but a small number of them might turn out to be something a lot of people wanted but didn’t know they needed.

rcmd is one of those things: a bit quirky, unique in its approach, and incredibly useful for a specific group of people.

That is also its weak point though. It’s hard to communicate this usefulness without being able to try the app first. But as it turns out, the App Store doesn’t provide any support for creating a free XX-day trial before buying an app.

Free trials for non-subscription apps have been allowed since mid-2018 on the App Store, and are supposed to be implemented using in-app purchases. Unfortunately, this approach has a lot of inconveniences which are very well detailed in this article: Ersatz Free Trials | Bitsplitting.org

These are the biggest shortcomings for my case:

I tried a few dozen apps on the App Store and I couldn’t find a single one offering a free trial for a non-subscription purchase using the above method.

Having to pay upfront steers away a lot of potential users, but with all that bad UX, we decided not to implement any free trial and just sell the app for a one-time fair price.


3 years ago, I would have probably chosen to make the app open source and give it away for free, just like I did with Lunar.

I would have thought:

I’m making a ton of money at this company, what I would get by selling a small app would be peanuts anyway.

Only recently did I realise that this approach kept me dependent on having a job where I click-clack useless programs 8 hours a day, only to get 1-2 hours after work for my own projects, sacrificing my health and sanity in the process.

In my whole 7-year career as a professional API Glue Technician and experienced Wheel Reinventer, I never did anything remotely as useful as even the simplest app I can code and publish in 2 months right now. At those companies, most of my work was scraped anyway when the redesign period of the year came.

So I’d rather have those peanuts please.


Everyone’s choosing to be left out

Now, with so many limitations, I think we can take a fair guess at why most indie developers choose to distribute their app outside the App Store.

Here are some of the apps I find most useful, and what I think is the main reason for them not being in the App Store:

Alfred

The app’s main functionality (searching the filesystem) needs Full Disk Access permissions which are not allowed inside the sandbox.

It also uses Accessibility Permissions for auto-expanding snippets and other custom workflows.

BetterTouchTool

Capturing and responding to all kinds of keyboard and trackpad events needs Accessibility Permissions.

The app also encapsulates the older BetterSnapTool utility for snapping windows to a grid. Resizing windows requires the same permissions.

Karabiner-Elements

Reacting to and changing keyboard input in realtime needs a special keyboard driver, which Apple only allows on a case-by-case basis. You have to request DriverKit entitlements from Apple, and they have to deem you worthy of those entitlements.

Needless to say, they won’t give hardware driver entitlements for a software app mimicking a keyboard.

Sublime Text

Full Disk Access is probably the biggest requirement here.

Of course, there are other code editors on the App Store like BBEdit but they have this initial phase where you have to manually give them access to your / (root) directory.

bbedit sandbox access dialog

bbedit allow access

Compared to Sublime Text's launch-and-edit-instantly first-run experience, I feel this is a bit annoying. I'm pretty sure it confuses a lot of first-time users, who will probably blame the developer, not knowing that this is the only way to access files from the sandbox.

Swish

Resizing windows, listening for global trackpad gestures, detecting titlebars, moving windows to other spaces/screens. All of these need Accessibility Permissions.

There’s even an FAQ for that on their page:

Why is Swish not on the App Store?

Apple only allows sandboxed apps on the App Store. Swish needs to perform low-level system operations which prevent it from being sandboxed. Read more here.

Sip

As outlined in their 2017 article, Moving from Mac App Store, the sandbox limitation is the primary reason.

CleanShot X

Their Screen Recording feature has three very useful functions:

Sketch

Honestly, I’m not sure about this one. The App Store is full of image editors and graphic content creation tools.

I think the unique pricing model is something they would have a hard time implementing on the App Store.

The unique pricing model of Sketch

sketch pricing model

Parallels Desktop

They actually have an App Store edition, but it’s severely limited.

Sharing things between the host and the VM is probably the largest functionality affected by the sandbox.

They provide a table with everything that’s missing in their App Store version of the app: KB Parallels: What is the difference between Parallels Desktop App Store Edition and Standard Edition?

Lunar

Low-level communication with monitors is only possible by using a lot of private and reverse engineered APIs (IOKit, DisplayServices, IOAVService etc.)

Accessibility Permissions are also needed for listening and reacting to brightness and volume key events.

Because of the sandbox, the lite App Store version of Lunar only supports software dimming and can only react to F1/F2 keys.

Soulver

I think the free trial limitation is the only thing keeping such a self-contained app outside the App Store.

Soulver is incredibly complex and useful in its functionality, but I don't think many people would splurge $35 on a notepad-calculator app without trying it first. It deserves every single dollar of that price, that I can say for sure.

The journey to controlling external monitors on M1 Macs

2021-07-16 23:39:37

One lazy evening in November 2020, I watched Tim Cook announce a fanless MacBook Air with a CPU faster than the latest 16-inch MacBook Pro, while my work-provided 15-inch 2019 MacBook Pro was slowly frying my lap and annoying my wife with its constant fan noise.

I had to get my hands on that machine. I also had the excuse that users of my app couldn’t control their monitor brightness anymore, so I could justify the expense easily in my head.

So I got it! With long delays and convoluted delivery schemes because living in a country like Romania means incredibly high prices on everything Apple.

This already starts to sound like those happy stories about seeing how awesome M1 is, but it’s far from that.

This is a story about how getting an M1 made me quit my job, bang my head against numerous walls to figure out monitor support for it and turn an open source app into something that I can really live off without needing a “real job”.

Adjusting monitor brightness on Intel Macs

I develop an app called Lunar that can adjust the real brightness, contrast and volume of monitors by sending DDC commands through the Mac GPU.

screenshot of the Lunar app Display Settings

On Intel Macs this worked really well because macOS had some private APIs to find the framebuffer of a monitor and send data to it through I²C, and best of all, someone had already done the hard part of figuring this out in the ddcctl utility.

M1 Macs came with a different kernel, very similar to the iOS one. The previous APIs weren’t working anymore on the M1 GPU, the IOFramebuffer was now an IOMobileFramebuffer and the IOI2C* functions weren’t doing anything.

All of a sudden, I was getting countless emails, Twitter DMs and GitHub issues about how Lunar doesn't work anymore on macOS Big Sur (most M1 users thought the OS upgrade was causing this, disregarding the fact that they were now using hardware and firmware never before seen on a Mac).

This was also a reality check for me. Without analytics, I had no idea that Lunar had so many active users!

Constructing the DDC request
#define BRIGHTNESS_CONTROL_ID 0x10  // VCP feature code for brightness
UInt8 brightness = 75;  // 75% brightness

IOI2CRequest request;
bzero(&request, sizeof(request));
request.commFlags = 0;

request.sendAddress = 0x6E;  // DDC/CI write address (0x37 shifted left by 1)
request.sendTransactionType = kIOI2CSimpleTransactionType;
request.sendBytes = 7;

UInt8 data[256];
request.sendBuffer = (vm_address_t)&data[0];

data[0] = 0x51;                   // source address (host)
data[1] = 0x84;                   // 0x80 | number of data bytes that follow (4)
data[2] = 0x03;                   // "Set VCP Feature" opcode
data[3] = BRIGHTNESS_CONTROL_ID;  // VCP feature code
data[4] = brightness >> 8;        // value high byte (0 for values <= 255)
data[5] = brightness & 255;       // value low byte
data[6] = 0x6E ^ data[0] ^ data[1] ^ data[2] ^ data[3] ^ data[4] ^ data[5];  // XOR checksum

request.replyTransactionType = kIOI2CNoTransactionType;  // no reply expected
request.replyBytes = 0;
Sending the data through I²C
// Find the framebuffer service for the display, then open its first I²C bus
io_service_t framebuffer = 0;
CGSServiceForDisplayNumber(displayID, &framebuffer);

IOOptionBits bus = 0;
io_service_t interface;
if (IOFBCopyI2CInterfaceForBus(framebuffer, bus++, &interface) != KERN_SUCCESS)
    return;

IOI2CConnectRef connect;
if (IOI2CInterfaceOpen(interface, kNilOptions, &connect) == KERN_SUCCESS) {
    IOI2CSendRequest(connect, kNilOptions, &request);
    IOI2CInterfaceClose(connect, kNilOptions);
}
IOObjectRelease(interface);

Hands-on with the M1

It was the last day of November. Winter was already coming. Days were cold and less than 10km away from my place you could take a walk through snowy forests.

snowy forests in Răcădău (Braşov, Romania)


But I was fortunate, as I had my trusty 2019 MacBook Pro to keep my hands warm while I was cranking out code that would be obsolete in less than 6 months at my day job.

Just as the day turned into evening, the delivery guy called me about a laptop: the custom-configured M1 MacBook Pro that cost as much as 7 junior developer monthly salaries had arrived!

After charging the laptop to 100%, I started the installation of my enormous Brewfile and left it on battery as an experiment. Meanwhile I kept working on the 2019 MacBook because my day job was also a night job when deadlines got tight.

Before I went to sleep, I wanted to test Lunar just to get an idea of what happens on M1. I launched it through Rosetta and the app window showed up as expected, every UI interaction worked normally but DDC was unresponsive. The monitor wasn’t being controlled in any way. I just hoped this was an easy fix and headed to bed.

Workarounds

So it turns out the I/O structure is very different on M1 (more similar to iPhones and iPads than to previous Macs). There’s no IOFramebuffer that we can call IOFBCopyI2CInterfaceForBus on. There’s now an IOMobileFramebuffer in its place that has no equivalent function for getting an I²C bus from it.

After days of sifting through the I/O Registry trying to find a way to send I²C data to the monitor, I gave up and tried to find a workaround.

I realized I couldn’t work without Lunar being functional. I went back to doing the ritual I had to do in the first days I got my monitor and had no idea about DDC:

Gamma Tables

One specific comment was becoming prevalent among Lunar users:

QuickShade works for me on M1. Why can’t Lunar work?

QuickShade uses a black overlay with adjustable opacity to lower the image brightness. It can work on any Mac because it doesn’t depend on some private APIs to change the brightness of the monitor.

it also makes colors look more washed out in low brightness

Actually, unlike Lunar, QuickShade doesn’t change the monitor brightness at all.

QuickShade simulates a lower brightness by darkening the image using a fullscreen click-through black window that changes its opacity based on the brightness slider. The LED backlight of the monitor and the brightness value in its OSD stay the same.

This is by no means meant as a critique of QuickShade. It is a simple utility that does its job very well. Some people don't even notice the difference between an overlay and real brightness adjustments, so QuickShade might be a better choice for them.

LED monitor basic structure

LED panel structure

I thought: simulating brightness isn't what Lunar set out to do. But at the same time, a lot of users depend on this app, and if it could at least do that, people would be just a bit happier.

So I started researching how the brightness of an image is perceived by the human eye, and read way too much content about the Gamma factor.
Here’s a very good article about the subject: What every coder should know about Gamma

I noticed that macOS has a very simple way to control the Gamma parameters so I said, why not? Let's try to approximate brightness and contrast using the Gamma tables:

let minGamma = 0.0
// Map brightness 0%-100% to a gamma value between 0.3 and 1.0
let gamma = mapNumber(
    brightnessPercent,
    fromLow: 0.0, fromHigh: 1.0,
    toLow: 0.3, toHigh: 1.0
)
// Map contrast to a ±0.2 offset applied on top of the gamma value
let contrast = mapNumber(
    powf(contrastPercent, 0.3),
    fromLow: 0, fromHigh: 1.0,
    toLow: -0.2, toHigh: 0.2
)

CGSetDisplayTransferByFormula(
    displayID,
    minGamma, gamma, gamma + contrast,  // red gamma
    minGamma, gamma, gamma + contrast,  // green gamma
    minGamma, gamma, gamma + contrast   // blue gamma
)

Of course, this needed weeks of refactoring because the app was not designed to support multiple ways of setting brightness (as usually happens in every single-person hacked-up project).

And there were so many unexpected issues, like, why does it take more than 5 seconds to apply the gamma values?? ლ(╹◡╹ლ)

It seems that the gamma changes become visible only on the next redraw of the screen. And since I was using the builtin display of the MacBook to write the code and the monitor was just for observing brightness changes, it only updated when I became too impatient and moved my cursor to the monitor in anger.

Now how do I force a screen redraw to make the gamma change apply instantly? (and maybe even transition smoothly between brightness values)

Just draw something on the screen ¯\_(ツ)_/¯

I chose to draw a (mostly hidden) blinking yellow dot when a gamma transition happens, to force screen redraw.


The Raspberry Pi idea

Now I was prepared to release a new version of Lunar with the Gamma approximation thing as a fallback for M1. But as it happens, one specific user sent me an email about how he managed to change the brightness of his monitor from a Raspberry Pi connected to the HDMI input, while the active input was still set to the MacBook's USB-C.

I had already explored this idea, as I have numerous Pis lying around, but I couldn't get it working at all. I started writing a condescending reply about how I had already tried this, how it would never work, and how he probably just had a monitor that happened to support this, so it wouldn't apply to other users.

But then… I realized what I was doing and started pressing backspace backspace backspace… all the while remembering how the best features of Lunar were in fact ideas sent by users, and that I should stop thinking that I know better.

Instead, I started asking questions:

I probably asked the right questions because the reply was exactly what I needed to get this working right away.
After 30 minutes of downloading the latest Raspberry Pi OS with full desktop environment, flashing it, updating to a beta firmware version, and setting the right values in /boot/config.txt, the Pi was able to send DDC requests using ddcutil while the monitor was rendering the MacBook desktop.

I couldn’t let this slip away, so I started implementing a network based DDC control for the next version of Lunar:

I established from the start that the local network latency and HTTP overhead was negligible compared to the DDC delay so I didn’t have to look into more complex solutions like USB serial, websockets or MQTT.

The dreaded day job

Even though the side project is such a praised thing in the software development world, I can't recommend doing one.

It was very hard doing all of the above in the little time I had after working 9+ hours a day doing fullstack work at a US company (which was also going through two transitions: being bought by a conglomerate and merging with another startup).

I owe a lot to my manager there; I wouldn't have had the strength to do what followed without his encouraging advice and ever-present genuine smile.

One day, he told me that he finally started working on a bugfix for a long-standing problem in our gRPC gateway. He confessed that it was the first time in two months he found the time to write some code (the thing he actually enjoyed), between all the meetings and video calls. 10 minutes later, another non-US based team needed his help and his coding time got filled with scheduled meetings yet again. That is the life of a technical manager.

Now that Lunar was working on M1 and the Buy me a Coffee donations showed that people find value in this app, I thought it was time to stop doing what I don’t like (working for companies on products that I never use) and start doing what I always seemed to like (creating software which I also enjoy using, and share it with others).

So on April 1st I finished my contract at the US company, and started implementing a licensing system in Lunar.

Sounds simple, right? Well, it's far from that. Preparing a product for sale took me two whole months, and more energy than I put into 4 months of experimenting with Gamma and DDC on M1 (yeah, that was the fun part). This part of the journey is the hardest, and not fun at all.

My take from this is: if you’re at the start of selling your work, choose a payment or licensing solution that requires the least amount of work, no matter how expensive it may seem at first.

I went with Paddle for Lunar because of the following reasons:

Even with that, I made the mistake of choosing a licensing system that wasn't natively supported by Paddle, and that sent me down a 2-month rabbit hole of licensing servers.

I wanted the system that Sketch has: a one-time payment for an unlimited license, that also includes 1 year of free updates.

Sketch app licensing

I²C on M1

After a successful launch in June, most users were happy with the Gamma solution, and some even tried the Raspberry Pi method: Lunar.app - a way for M1 Macs to control 3rd Party Monitor’s Brightness and Contrast - Hardware - MPU Talk

One user, though, was still persistent in looking for I²C support. Twice he tried to bring to my attention a way to use I²C on M1, and the second time he finally succeeded.

@zhuowei Github comment

His GitHub comment on the M1 issue for Lunar sparked a new hope among users, and some of the more technical users started experimenting with the IOAVServiceReadI2C and IOAVServiceWriteI2C functions.

Because of my shallow understanding of the DDC specification at the time, I couldn’t get a working proof of concept in the first few tries.

I didn’t know exactly what chipAddress and dataAddress were for:

IOAVServiceWriteI2C(
  IOAVServiceRef service,
  uint32_t chipAddress,
  uint32_t dataAddress,
  void* inputBuffer,
  uint32_t inputBufferSize
)

I knew from my experiments with ESP32 and Arduino boards that I²C is in fact a serial bus, which means you can communicate with more than one device from the same 2 pins of the main device by chaining the secondary devices.

That possibility brings the requirement of a chip address which the main device should send over the wire to reach a specific device from that chain.

chaining sensor boards through I²C

I²C chain of sensors

In the DDC standard, the secondary device is the monitor and has the chip address 0x37.

The EDID chip is located at address 0x50, which is what we have in @zhuowei’s EDID reading example:

IOAVServiceReadI2C(avService, 0x50, 0x0, i2cBytes, sizeof(i2cBytes));

But then what is the dataAddress?

No idea, but thankfully someone reverse engineered the communication protocol and found this to always be 0x51.

After some trial and error, user @tao-j discovered the above details and managed to finally change the brightness from his M1 MacBook.

The Mac Mini problem

Unfortunately, this was just the beginning, as the Mac Mini supports more than one monitor and it’s not clear which monitor we’re controlling when calling IOAVServiceCreate().

I found a way to get each monitor’s specific AVService by iterating the I/O Kit registry tree and looking for the AppleCLCD2 class. To know which AppleCLCD2 belonged to what monitor, I had to cross reference identification data returned by CoreDisplay_DisplayCreateInfoDictionary with the attributes of the registry node.

With that convoluted logic, I managed to get DDC working on Mac Mini as well, but only on the Thunderbolt 3 port. The HDMI port still doesn’t work for DDC, and no one knows why.

In the end, DDC on M1 was finally working in the same way it worked on Intel Macs!

Sending I²C data on M1

#define BRIGHTNESS 0x10  // VCP feature code for brightness
IOAVServiceRef avService = IOAVServiceCreate(kCFAllocatorDefault);

UInt8 data[256];
memset(data, 0, sizeof(data));

UInt8 brightness = 70;

data[0] = 0x84;              // 0x80 | number of data bytes that follow (4)
data[1] = 0x03;              // "Set VCP Feature" opcode
data[2] = BRIGHTNESS;        // VCP feature code
data[3] = brightness >> 8;   // value high byte
data[4] = brightness & 255;  // value low byte
// The checksum also covers the write address (0x6E) and the dataAddress (0x51),
// which IOAVServiceWriteI2C sends on the wire itself
data[5] = 0x6E ^ 0x51 ^ data[0] ^ data[1] ^ data[2] ^ data[3] ^ data[4];

IOAVServiceWriteI2C(avService, 0x37, 0x51, data, 6);

Some quirks are still bothering the users of Lunar though:

For the moment, these seem to be hardware problems, and I’ll just have to keep responding to the early-morning support emails no matter how obvious I make it that they are unsolvable.

Technical stuff

I left these at the end because the details may bore most people but they might still be useful for a very small number of readers.

How is an app able to change the hardware brightness of a monitor?

All monitors have a powerful microprocessor inside whose purpose is to receive video data over multiple types of connections and create images from that data through the incredibly tiny crystals of the panel.

That same microprocessor dims or brightens a panel of LEDs behind that panel of crystals based on the Brightness value that you can change in the monitor settings using its physical buttons.

Because the devices that connect to the monitor need to know stuff about its capabilities (e.g. resolution, color profile etc), there needs to be a language known by both the computer and the monitor so that they can communicate.

That language is called a communication protocol. The protocol implemented inside the processors of most monitors is called Display Data Channel or DDC for short.

To allow for different monitor properties to be read or changed from the host device, VESA created the Monitor Control Command Set (or MCCS for short) which works over DDC.

MCCS is what allows Lunar and other apps to change the monitor brightness, contrast, volume, input etc.

Then what the heck is I²C?

I²C is a wire protocol: it basically specifies how to translate electrical pulses sent over two wires into bits of information.

DDC specifies which sequences of bits are valid, while I²C specifies how a device like the monitor microprocessor can get those bits through wires inside the HDMI, DisplayPort, USB-C etc. cables.

Why does macOS block me from changing volume on the monitor, while Windows allows that?

volume lock macOS OSD

macOS doesn’t block volume; it simply doesn’t implement any way for you to change the volume of a monitor.

Windows actually only changes the software volume, so if your monitor’s real volume is at 50%, Windows can only lower that in software, meaning you’ll hear anything between 0% and 50%. If you check the monitor OSD, you’ll see that the monitor’s volume value always stays at 50%.

Now macOS could probably do that as well, so that at least we’d have a way to lower the volume. But it doesn’t.

So if you want to change the real volume of the monitor on Mac, Lunar can do that.